00:00:00.002 Started by upstream project "autotest-per-patch" build number 121263
00:00:00.002 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.079 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.080 The recommended git tool is: git
00:00:00.080 using credential 00000000-0000-0000-0000-000000000002
00:00:00.084 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.120 Fetching changes from the remote Git repository
00:00:00.122 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.178 Using shallow fetch with depth 1
00:00:00.178 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.178 > git --version # timeout=10
00:00:00.214 > git --version # 'git version 2.39.2'
00:00:00.214 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.214 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.214 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.241 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.253 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.267 Checking out Revision f964f6d3463483adf05cc5c086f2abd292e05f1d (FETCH_HEAD)
00:00:06.267 > git config core.sparsecheckout # timeout=10
00:00:06.278 > git read-tree -mu HEAD # timeout=10
00:00:06.295 > git checkout -f f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=5
00:00:06.317 Commit message: "ansible/roles/custom_facts: Drop nvme features"
00:00:06.317 > git rev-list --no-walk f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=10
00:00:06.399 [Pipeline] Start of Pipeline
00:00:06.412 [Pipeline] library
00:00:06.414 Loading library shm_lib@master
00:00:06.414 Library shm_lib@master is cached. Copying from home.
00:00:06.431 [Pipeline] node
00:00:06.444 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.446 [Pipeline] {
00:00:06.458 [Pipeline] catchError
00:00:06.460 [Pipeline] {
00:00:06.475 [Pipeline] wrap
00:00:06.485 [Pipeline] {
00:00:06.493 [Pipeline] stage
00:00:06.495 [Pipeline] { (Prologue)
00:00:06.693 [Pipeline] sh
00:00:06.979 + logger -p user.info -t JENKINS-CI
00:00:07.000 [Pipeline] echo
00:00:07.002 Node: CYP12
00:00:07.011 [Pipeline] sh
00:00:07.314 [Pipeline] setCustomBuildProperty
00:00:07.326 [Pipeline] echo
00:00:07.328 Cleanup processes
00:00:07.333 [Pipeline] sh
00:00:07.619 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.619 1292513 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.635 [Pipeline] sh
00:00:07.922 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.922 ++ grep -v 'sudo pgrep'
00:00:07.922 ++ awk '{print $1}'
00:00:07.922 + sudo kill -9
00:00:07.922 + true
00:00:07.937 [Pipeline] cleanWs
00:00:07.946 [WS-CLEANUP] Deleting project workspace...
00:00:07.946 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.952 [WS-CLEANUP] done
00:00:07.957 [Pipeline] setCustomBuildProperty
00:00:07.973 [Pipeline] sh
00:00:08.251 + sudo git config --global --replace-all safe.directory '*'
00:00:08.329 [Pipeline] nodesByLabel
00:00:08.331 Found a total of 1 nodes with the 'sorcerer' label
00:00:08.338 [Pipeline] httpRequest
00:00:08.343 HttpMethod: GET
00:00:08.344 URL: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz
00:00:08.346 Sending request to url: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz
00:00:08.357 Response Code: HTTP/1.1 200 OK
00:00:08.358 Success: Status code 200 is in the accepted range: 200,404
00:00:08.358 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz
00:00:10.066 [Pipeline] sh
00:00:10.349 + tar --no-same-owner -xf jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz
00:00:10.367 [Pipeline] httpRequest
00:00:10.372 HttpMethod: GET
00:00:10.373 URL: http://10.211.164.96/packages/spdk_f93182c78e3c077975126b50452fed761f9587e0.tar.gz
00:00:10.373 Sending request to url: http://10.211.164.96/packages/spdk_f93182c78e3c077975126b50452fed761f9587e0.tar.gz
00:00:10.382 Response Code: HTTP/1.1 200 OK
00:00:10.382 Success: Status code 200 is in the accepted range: 200,404
00:00:10.383 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_f93182c78e3c077975126b50452fed761f9587e0.tar.gz
00:00:39.044 [Pipeline] sh
00:00:39.336 + tar --no-same-owner -xf spdk_f93182c78e3c077975126b50452fed761f9587e0.tar.gz
00:00:42.650 [Pipeline] sh
00:00:42.934 + git -C spdk log --oneline -n5
00:00:42.934 f93182c78 accel: remove flags
00:00:42.934 bebe61b53 util: remove spdk_iov_one()
00:00:42.934 975bb24ba nvmf: remove spdk_nvmf_subsytem_any_listener_allowed()
00:00:42.934 f8d98be2d nvmf: remove cb_fn/cb_arg from spdk_nvmf_qpair_disconnect()
00:00:42.934 3dbaa93c1 nvmf: pass command dword 12 and 13 for write
00:00:42.945 [Pipeline] }
00:00:42.956 [Pipeline] // stage
00:00:42.962 [Pipeline] stage
00:00:42.963 [Pipeline] { (Prepare)
00:00:42.977 [Pipeline] writeFile
00:00:42.998 [Pipeline] sh
00:00:43.287 + logger -p user.info -t JENKINS-CI
00:00:43.300 [Pipeline] sh
00:00:43.584 + logger -p user.info -t JENKINS-CI
00:00:43.596 [Pipeline] sh
00:00:43.880 + cat autorun-spdk.conf
00:00:43.880 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:43.880 SPDK_TEST_NVMF=1
00:00:43.880 SPDK_TEST_NVME_CLI=1
00:00:43.880 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:43.880 SPDK_TEST_NVMF_NICS=e810
00:00:43.880 SPDK_TEST_VFIOUSER=1
00:00:43.880 SPDK_RUN_UBSAN=1
00:00:43.880 NET_TYPE=phy
00:00:43.887 RUN_NIGHTLY=0
00:00:43.892 [Pipeline] readFile
00:00:43.920 [Pipeline] withEnv
00:00:43.922 [Pipeline] {
00:00:43.935 [Pipeline] sh
00:00:44.221 + set -ex
00:00:44.221 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:44.221 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:44.221 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:44.221 ++ SPDK_TEST_NVMF=1
00:00:44.221 ++ SPDK_TEST_NVME_CLI=1
00:00:44.221 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:44.221 ++ SPDK_TEST_NVMF_NICS=e810
00:00:44.221 ++ SPDK_TEST_VFIOUSER=1
00:00:44.221 ++ SPDK_RUN_UBSAN=1
00:00:44.221 ++ NET_TYPE=phy
00:00:44.221 ++ RUN_NIGHTLY=0
00:00:44.221 + case $SPDK_TEST_NVMF_NICS in
00:00:44.221 + DRIVERS=ice
00:00:44.221 + [[ tcp == \r\d\m\a ]]
00:00:44.221 + [[ -n ice ]]
00:00:44.221 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:44.221 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:44.221 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:44.221 rmmod: ERROR: Module irdma is not currently loaded
00:00:44.221 rmmod: ERROR: Module i40iw is not currently loaded
00:00:44.221 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:44.221 + true
00:00:44.221 + for D in $DRIVERS
00:00:44.221 + sudo modprobe ice
00:00:44.221 + exit 0
00:00:44.232 [Pipeline] }
00:00:44.249 [Pipeline] // withEnv
00:00:44.254 [Pipeline] }
00:00:44.268 [Pipeline] // stage
00:00:44.276 [Pipeline] catchError
00:00:44.278 [Pipeline] {
00:00:44.291 [Pipeline] timeout
00:00:44.291 Timeout set to expire in 40 min
00:00:44.293 [Pipeline] {
00:00:44.306 [Pipeline] stage
00:00:44.308 [Pipeline] { (Tests)
00:00:44.323 [Pipeline] sh
00:00:44.604 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:44.604 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:44.604 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:44.604 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:44.604 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:44.604 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:44.604 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:44.604 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:44.604 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:44.604 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:44.604 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:44.604 + source /etc/os-release
00:00:44.604 ++ NAME='Fedora Linux'
00:00:44.604 ++ VERSION='38 (Cloud Edition)'
00:00:44.604 ++ ID=fedora
00:00:44.604 ++ VERSION_ID=38
00:00:44.604 ++ VERSION_CODENAME=
00:00:44.604 ++ PLATFORM_ID=platform:f38
00:00:44.604 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:44.604 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:44.604 ++ LOGO=fedora-logo-icon
00:00:44.604 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:44.604 ++ HOME_URL=https://fedoraproject.org/
00:00:44.604 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:44.604 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:44.604 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:44.604 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:44.604 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:44.604 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:44.604 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:44.604 ++ SUPPORT_END=2024-05-14
00:00:44.604 ++ VARIANT='Cloud Edition'
00:00:44.604 ++ VARIANT_ID=cloud
00:00:44.604 + uname -a
00:00:44.604 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:44.604 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:47.902 Hugepages
00:00:47.902 node hugesize free / total
00:00:47.902 node0 1048576kB 0 / 0
00:00:47.902 node0 2048kB 0 / 0
00:00:47.902 node1 1048576kB 0 / 0
00:00:47.902 node1 2048kB 0 / 0
00:00:47.902
00:00:47.902 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:47.902 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:00:47.902 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:00:47.902 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:00:47.902 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:00:47.902 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:00:47.902 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:00:47.902 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:00:47.902 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:00:47.902 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:00:47.902 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:00:47.902 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:00:47.902 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:00:47.902 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:00:47.902 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:00:47.902 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:00:47.902 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:00:47.902 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:00:47.902 + rm -f /tmp/spdk-ld-path
00:00:47.902 + source autorun-spdk.conf
00:00:47.902 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:47.902 ++ SPDK_TEST_NVMF=1
00:00:47.902 ++ SPDK_TEST_NVME_CLI=1
00:00:47.902 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:47.902 ++ SPDK_TEST_NVMF_NICS=e810
00:00:47.902 ++ SPDK_TEST_VFIOUSER=1
00:00:47.902 ++ SPDK_RUN_UBSAN=1
00:00:47.902 ++ NET_TYPE=phy
00:00:47.902 ++ RUN_NIGHTLY=0
00:00:47.902 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:47.902 + [[ -n '' ]]
00:00:47.902 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:47.902 + for M in /var/spdk/build-*-manifest.txt
00:00:47.902 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:47.902 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:47.902 + for M in /var/spdk/build-*-manifest.txt
00:00:47.902 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:47.902 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:47.902 ++ uname
00:00:47.902 + [[ Linux == \L\i\n\u\x ]]
00:00:47.902 + sudo dmesg -T
00:00:47.902 + sudo dmesg --clear
00:00:47.902 + dmesg_pid=1293505
00:00:47.902 + [[ Fedora Linux == FreeBSD ]]
00:00:47.902 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:47.902 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:47.902 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:47.902 + [[ -x /usr/src/fio-static/fio ]]
00:00:47.902 + export FIO_BIN=/usr/src/fio-static/fio
00:00:47.902 + FIO_BIN=/usr/src/fio-static/fio
00:00:47.902 + sudo dmesg -Tw
00:00:47.902 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:47.902 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:47.902 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:47.902 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:47.902 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:47.902 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:47.902 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:47.902 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:47.902 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:47.903 Test configuration:
00:00:47.903 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:47.903 SPDK_TEST_NVMF=1
00:00:47.903 SPDK_TEST_NVME_CLI=1
00:00:47.903 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:47.903 SPDK_TEST_NVMF_NICS=e810
00:00:47.903 SPDK_TEST_VFIOUSER=1
00:00:47.903 SPDK_RUN_UBSAN=1
00:00:47.903 NET_TYPE=phy
00:00:47.903 RUN_NIGHTLY=0
15:11:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:00:47.903 15:11:05 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:47.903 15:11:05 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:47.903 15:11:05 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:47.903 15:11:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:47.903 15:11:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:47.903 15:11:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:47.903 15:11:05 -- paths/export.sh@5 -- $ export PATH
00:00:47.903 15:11:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:47.903 15:11:05 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:00:47.903 15:11:05 -- common/autobuild_common.sh@435 -- $ date +%s
00:00:47.903 15:11:05 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714137065.XXXXXX
00:00:47.903 15:11:05 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714137065.XnIMNk
00:00:47.903 15:11:05 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:00:47.903 15:11:05 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:00:47.903 15:11:05 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:00:47.903 15:11:05 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:47.903 15:11:05 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:47.903 15:11:05 -- common/autobuild_common.sh@451 -- $ get_config_params
00:00:47.903 15:11:05 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:00:47.903 15:11:05 -- common/autotest_common.sh@10 -- $ set +x
00:00:47.903 15:11:05 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:00:47.903 15:11:05 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
00:00:47.903 15:11:05 -- pm/common@17 -- $ local monitor
00:00:47.903 15:11:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:47.903 15:11:05 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1293540
00:00:47.903 15:11:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:47.903 15:11:05 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1293542
00:00:47.903 15:11:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:47.903 15:11:05 -- pm/common@21 -- $ date +%s
00:00:47.903 15:11:05 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1293544
00:00:47.903 15:11:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:47.903 15:11:05 -- pm/common@21 -- $ date +%s
00:00:47.903 15:11:05 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1293547
00:00:47.903 15:11:05 -- pm/common@26 -- $ sleep 1
00:00:47.903 15:11:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714137065
00:00:47.903 15:11:05 -- pm/common@21 -- $ date +%s
00:00:47.903 15:11:05 -- pm/common@21 -- $ date +%s
00:00:47.903 15:11:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714137065
00:00:47.903 15:11:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714137065
00:00:47.903 15:11:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714137065
00:00:47.903 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714137065_collect-cpu-load.pm.log
00:00:48.164 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714137065_collect-vmstat.pm.log
00:00:48.164 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714137065_collect-bmc-pm.bmc.pm.log
00:00:48.164 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714137065_collect-cpu-temp.pm.log
00:00:49.106 15:11:06 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:00:49.106 15:11:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:49.106 15:11:06 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:49.106 15:11:06 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:49.106 15:11:06 -- spdk/autobuild.sh@16 -- $ date -u
00:00:49.106 Fri Apr 26 01:11:06 PM UTC 2024
00:00:49.106 15:11:06 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:49.106 v24.05-pre-450-gf93182c78
00:00:49.106 15:11:06 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:49.106 15:11:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:49.106 15:11:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:49.106 15:11:06 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:00:49.106 15:11:06 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:00:49.106 15:11:06 -- common/autotest_common.sh@10 -- $ set +x
00:00:49.106 ************************************
00:00:49.106 START TEST ubsan
00:00:49.106 ************************************
00:00:49.106 15:11:06 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan'
00:00:49.106 using ubsan
00:00:49.106
00:00:49.106 real 0m0.001s
00:00:49.106 user 0m0.000s
00:00:49.106 sys 0m0.000s
00:00:49.106 15:11:06 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:00:49.106 15:11:06 -- common/autotest_common.sh@10 -- $ set +x
00:00:49.106 ************************************
00:00:49.106 END TEST ubsan
00:00:49.106 ************************************
00:00:49.106 15:11:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:49.106 15:11:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:49.106 15:11:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:49.106 15:11:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:49.106 15:11:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:49.106 15:11:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:49.106 15:11:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:49.106 15:11:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:49.106 15:11:06 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:49.366 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:49.366 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:49.626 Using 'verbs' RDMA provider
00:01:05.476 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:17.705 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:17.705 Creating mk/config.mk...done.
00:01:17.706 Creating mk/cc.flags.mk...done.
00:01:17.706 Type 'make' to build.
00:01:17.706 15:11:34 -- spdk/autobuild.sh@69 -- $ run_test make make -j144
00:01:17.706 15:11:34 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:17.706 15:11:34 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:17.706 15:11:34 -- common/autotest_common.sh@10 -- $ set +x
00:01:17.706 ************************************
00:01:17.706 START TEST make
00:01:17.706 ************************************
00:01:17.706 15:11:34 -- common/autotest_common.sh@1111 -- $ make -j144
00:01:17.706 make[1]: Nothing to be done for 'all'.
00:01:18.645 The Meson build system
00:01:18.645 Version: 1.3.1
00:01:18.645 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:18.645 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:18.645 Build type: native build
00:01:18.645 Project name: libvfio-user
00:01:18.645 Project version: 0.0.1
00:01:18.645 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:18.645 C linker for the host machine: cc ld.bfd 2.39-16
00:01:18.645 Host machine cpu family: x86_64
00:01:18.645 Host machine cpu: x86_64
00:01:18.645 Run-time dependency threads found: YES
00:01:18.645 Library dl found: YES
00:01:18.645 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:18.645 Run-time dependency json-c found: YES 0.17
00:01:18.645 Run-time dependency cmocka found: YES 1.1.7
00:01:18.645 Program pytest-3 found: NO
00:01:18.645 Program flake8 found: NO
00:01:18.645 Program misspell-fixer found: NO
00:01:18.645 Program restructuredtext-lint found: NO
00:01:18.645 Program valgrind found: YES (/usr/bin/valgrind)
00:01:18.645 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:18.645 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:18.645 Compiler for C supports arguments -Wwrite-strings: YES
00:01:18.645 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:18.645 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:18.645 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:18.645 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:18.645 Build targets in project: 8
00:01:18.645 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:18.645 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:18.645
00:01:18.645 libvfio-user 0.0.1
00:01:18.645
00:01:18.645 User defined options
00:01:18.645 buildtype : debug
00:01:18.645 default_library: shared
00:01:18.645 libdir : /usr/local/lib
00:01:18.645
00:01:18.645 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:18.902 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:18.902 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:18.902 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:18.902 [3/37] Compiling C object samples/null.p/null.c.o
00:01:19.160 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:19.160 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:19.160 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:19.160 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:19.160 [8/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:19.160 [9/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:19.160 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:19.160 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:19.160 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:19.160 [13/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:19.160 [14/37] Compiling C object samples/server.p/server.c.o
00:01:19.160 [15/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:19.160 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:19.160 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:19.160 [18/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:19.160 [19/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:19.160 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:19.160 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:19.160 [22/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:19.160 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:19.160 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:19.160 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:19.160 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:19.160 [27/37] Compiling C object samples/client.p/client.c.o
00:01:19.160 [28/37] Linking target lib/libvfio-user.so.0.0.1
00:01:19.160 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:19.160 [30/37] Linking target samples/client
00:01:19.160 [31/37] Linking target test/unit_tests
00:01:19.160 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:19.419 [33/37] Linking target samples/server
00:01:19.419 [34/37] Linking target samples/null
00:01:19.419 [35/37] Linking target samples/shadow_ioeventfd_server
00:01:19.419 [36/37] Linking target samples/gpio-pci-idio-16
00:01:19.419 [37/37] Linking target samples/lspci
00:01:19.419 INFO: autodetecting backend as ninja
00:01:19.419 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:19.419 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:19.679 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:19.679 ninja: no work to do.
00:01:26.288 The Meson build system
00:01:26.288 Version: 1.3.1
00:01:26.288 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:26.288 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:26.288 Build type: native build
00:01:26.288 Program cat found: YES (/usr/bin/cat)
00:01:26.288 Project name: DPDK
00:01:26.288 Project version: 23.11.0
00:01:26.288 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:26.288 C linker for the host machine: cc ld.bfd 2.39-16
00:01:26.288 Host machine cpu family: x86_64
00:01:26.288 Host machine cpu: x86_64
00:01:26.288 Message: ## Building in Developer Mode ##
00:01:26.288 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:26.288 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:26.288 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:26.288 Program python3 found: YES (/usr/bin/python3)
00:01:26.288 Program cat found: YES (/usr/bin/cat)
00:01:26.288 Compiler for C supports arguments -march=native: YES
00:01:26.288 Checking for size of "void *" : 8
00:01:26.288 Checking for size of "void *" : 8 (cached)
00:01:26.288 Library m found: YES
00:01:26.288 Library numa found: YES
00:01:26.288 Has header "numaif.h" : YES
00:01:26.288 Library fdt found: NO
00:01:26.288 Library execinfo found: NO
00:01:26.288 Has header "execinfo.h" : YES
00:01:26.288 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:26.288 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:26.288 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:26.288 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:26.288 Run-time dependency openssl found: YES 3.0.9
00:01:26.288 Run-time dependency libpcap found: YES 1.10.4
00:01:26.288 Has header "pcap.h" with dependency libpcap: YES
00:01:26.288 Compiler for C supports arguments -Wcast-qual: YES
00:01:26.288 Compiler for C supports arguments -Wdeprecated: YES
00:01:26.288 Compiler for C supports arguments -Wformat: YES
00:01:26.288 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:26.288 Compiler for C supports arguments -Wformat-security: NO
00:01:26.288 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:26.288 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:26.288 Compiler for C supports arguments -Wnested-externs: YES
00:01:26.288 Compiler for C supports arguments -Wold-style-definition: YES
00:01:26.288 Compiler for C supports arguments -Wpointer-arith: YES
00:01:26.288 Compiler for C supports arguments -Wsign-compare: YES
00:01:26.288 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:26.288 Compiler for C supports arguments -Wundef: YES
00:01:26.288 Compiler for C supports arguments -Wwrite-strings: YES
00:01:26.288 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:26.288 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:26.288 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:26.288 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:26.288 Program objdump found: YES (/usr/bin/objdump)
00:01:26.288 Compiler for C supports arguments -mavx512f: YES
00:01:26.288 Checking if "AVX512 checking" compiles: YES
00:01:26.288 Fetching value of define "__SSE4_2__" : 1
00:01:26.288 Fetching value of define "__AES__" : 1
00:01:26.288 Fetching value of define "__AVX__" : 1
00:01:26.288 Fetching value of define "__AVX2__" : 1
00:01:26.288 Fetching value of define "__AVX512BW__" : 1
00:01:26.288 Fetching value of define "__AVX512CD__" : 1
00:01:26.288 Fetching value of define "__AVX512DQ__" : 1
00:01:26.288 Fetching value of define "__AVX512F__" : 1
00:01:26.288 Fetching value of define "__AVX512VL__" : 1
00:01:26.288 Fetching value of define "__PCLMUL__" : 1
00:01:26.288 Fetching value of define "__RDRND__" : 1
00:01:26.288 Fetching value of define "__RDSEED__" : 1
00:01:26.288 Fetching value of define "__VPCLMULQDQ__" : 1
00:01:26.288 Fetching value of define "__znver1__" : (undefined)
00:01:26.288 Fetching value of define "__znver2__" : (undefined)
00:01:26.288 Fetching value of define "__znver3__" : (undefined)
00:01:26.288 Fetching value of define "__znver4__" : (undefined)
00:01:26.288 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:26.288 Message: lib/log: Defining dependency "log"
00:01:26.288 Message: lib/kvargs: Defining dependency "kvargs"
00:01:26.288 Message: lib/telemetry: Defining dependency "telemetry"
00:01:26.288 Checking for function "getentropy" : NO
00:01:26.288 Message: lib/eal: Defining dependency "eal"
00:01:26.288 Message: lib/ring: Defining dependency "ring"
00:01:26.288 Message: lib/rcu: Defining dependency "rcu"
00:01:26.288 Message: lib/mempool: Defining dependency "mempool"
00:01:26.288 Message: lib/mbuf: Defining dependency "mbuf"
00:01:26.288 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:26.288 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:26.288 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:26.288 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:26.288 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:26.288 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:01:26.288 Compiler for C supports arguments -mpclmul: YES
00:01:26.288 Compiler for C supports arguments -maes: YES
00:01:26.288 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:26.288 Compiler for C supports arguments -mavx512bw: YES
00:01:26.288 Compiler for C supports arguments -mavx512dq: YES
00:01:26.288 Compiler for C supports arguments -mavx512vl: YES
00:01:26.288 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:26.288 Compiler for C supports arguments -mavx2: YES
00:01:26.288 Compiler for C supports arguments -mavx: YES
00:01:26.288 Message: lib/net: Defining dependency "net"
00:01:26.288 Message: lib/meter: Defining dependency "meter"
00:01:26.288 Message: lib/ethdev: Defining dependency "ethdev"
00:01:26.288 Message: lib/pci: Defining dependency "pci"
00:01:26.288 Message: lib/cmdline: Defining dependency "cmdline"
00:01:26.288 Message: lib/hash: Defining dependency "hash"
00:01:26.288 Message: lib/timer: Defining dependency "timer"
00:01:26.288 Message: lib/compressdev: Defining dependency "compressdev"
00:01:26.288 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:26.288 Message: lib/dmadev: Defining dependency "dmadev"
00:01:26.288 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:26.288 Message: lib/power: Defining dependency "power"
00:01:26.288 Message: lib/reorder: Defining dependency "reorder"
00:01:26.288 Message: lib/security: Defining dependency "security"
00:01:26.288 Has header "linux/userfaultfd.h" : YES
00:01:26.288 Has header "linux/vduse.h" : YES
00:01:26.288 Message: lib/vhost: Defining dependency "vhost"
00:01:26.288 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:26.288 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:26.288 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:26.288 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:26.288 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:26.288 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:26.288 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:26.288 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:26.288 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:26.288 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:26.288 Program doxygen found: YES (/usr/bin/doxygen)
00:01:26.288 Configuring doxy-api-html.conf using
configuration 00:01:26.288 Configuring doxy-api-man.conf using configuration 00:01:26.288 Program mandb found: YES (/usr/bin/mandb) 00:01:26.288 Program sphinx-build found: NO 00:01:26.289 Configuring rte_build_config.h using configuration 00:01:26.289 Message: 00:01:26.289 ================= 00:01:26.289 Applications Enabled 00:01:26.289 ================= 00:01:26.289 00:01:26.289 apps: 00:01:26.289 00:01:26.289 00:01:26.289 Message: 00:01:26.289 ================= 00:01:26.289 Libraries Enabled 00:01:26.289 ================= 00:01:26.289 00:01:26.289 libs: 00:01:26.289 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:26.289 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:26.289 cryptodev, dmadev, power, reorder, security, vhost, 00:01:26.289 00:01:26.289 Message: 00:01:26.289 =============== 00:01:26.289 Drivers Enabled 00:01:26.289 =============== 00:01:26.289 00:01:26.289 common: 00:01:26.289 00:01:26.289 bus: 00:01:26.289 pci, vdev, 00:01:26.289 mempool: 00:01:26.289 ring, 00:01:26.289 dma: 00:01:26.289 00:01:26.289 net: 00:01:26.289 00:01:26.289 crypto: 00:01:26.289 00:01:26.289 compress: 00:01:26.289 00:01:26.289 vdpa: 00:01:26.289 00:01:26.289 00:01:26.289 Message: 00:01:26.289 ================= 00:01:26.289 Content Skipped 00:01:26.289 ================= 00:01:26.289 00:01:26.289 apps: 00:01:26.289 dumpcap: explicitly disabled via build config 00:01:26.289 graph: explicitly disabled via build config 00:01:26.289 pdump: explicitly disabled via build config 00:01:26.289 proc-info: explicitly disabled via build config 00:01:26.289 test-acl: explicitly disabled via build config 00:01:26.289 test-bbdev: explicitly disabled via build config 00:01:26.289 test-cmdline: explicitly disabled via build config 00:01:26.289 test-compress-perf: explicitly disabled via build config 00:01:26.289 test-crypto-perf: explicitly disabled via build config 00:01:26.289 test-dma-perf: explicitly disabled via build config 00:01:26.289 test-eventdev: 
explicitly disabled via build config 00:01:26.289 test-fib: explicitly disabled via build config 00:01:26.289 test-flow-perf: explicitly disabled via build config 00:01:26.289 test-gpudev: explicitly disabled via build config 00:01:26.289 test-mldev: explicitly disabled via build config 00:01:26.289 test-pipeline: explicitly disabled via build config 00:01:26.289 test-pmd: explicitly disabled via build config 00:01:26.289 test-regex: explicitly disabled via build config 00:01:26.289 test-sad: explicitly disabled via build config 00:01:26.289 test-security-perf: explicitly disabled via build config 00:01:26.289 00:01:26.289 libs: 00:01:26.289 metrics: explicitly disabled via build config 00:01:26.289 acl: explicitly disabled via build config 00:01:26.289 bbdev: explicitly disabled via build config 00:01:26.289 bitratestats: explicitly disabled via build config 00:01:26.289 bpf: explicitly disabled via build config 00:01:26.289 cfgfile: explicitly disabled via build config 00:01:26.289 distributor: explicitly disabled via build config 00:01:26.289 efd: explicitly disabled via build config 00:01:26.289 eventdev: explicitly disabled via build config 00:01:26.289 dispatcher: explicitly disabled via build config 00:01:26.289 gpudev: explicitly disabled via build config 00:01:26.289 gro: explicitly disabled via build config 00:01:26.289 gso: explicitly disabled via build config 00:01:26.289 ip_frag: explicitly disabled via build config 00:01:26.289 jobstats: explicitly disabled via build config 00:01:26.289 latencystats: explicitly disabled via build config 00:01:26.289 lpm: explicitly disabled via build config 00:01:26.289 member: explicitly disabled via build config 00:01:26.289 pcapng: explicitly disabled via build config 00:01:26.289 rawdev: explicitly disabled via build config 00:01:26.289 regexdev: explicitly disabled via build config 00:01:26.289 mldev: explicitly disabled via build config 00:01:26.289 rib: explicitly disabled via build config 00:01:26.289 sched: 
explicitly disabled via build config 00:01:26.289 stack: explicitly disabled via build config 00:01:26.289 ipsec: explicitly disabled via build config 00:01:26.289 pdcp: explicitly disabled via build config 00:01:26.289 fib: explicitly disabled via build config 00:01:26.289 port: explicitly disabled via build config 00:01:26.289 pdump: explicitly disabled via build config 00:01:26.289 table: explicitly disabled via build config 00:01:26.289 pipeline: explicitly disabled via build config 00:01:26.289 graph: explicitly disabled via build config 00:01:26.289 node: explicitly disabled via build config 00:01:26.289 00:01:26.289 drivers: 00:01:26.289 common/cpt: not in enabled drivers build config 00:01:26.289 common/dpaax: not in enabled drivers build config 00:01:26.289 common/iavf: not in enabled drivers build config 00:01:26.289 common/idpf: not in enabled drivers build config 00:01:26.289 common/mvep: not in enabled drivers build config 00:01:26.289 common/octeontx: not in enabled drivers build config 00:01:26.289 bus/auxiliary: not in enabled drivers build config 00:01:26.289 bus/cdx: not in enabled drivers build config 00:01:26.289 bus/dpaa: not in enabled drivers build config 00:01:26.289 bus/fslmc: not in enabled drivers build config 00:01:26.289 bus/ifpga: not in enabled drivers build config 00:01:26.289 bus/platform: not in enabled drivers build config 00:01:26.289 bus/vmbus: not in enabled drivers build config 00:01:26.289 common/cnxk: not in enabled drivers build config 00:01:26.289 common/mlx5: not in enabled drivers build config 00:01:26.289 common/nfp: not in enabled drivers build config 00:01:26.289 common/qat: not in enabled drivers build config 00:01:26.289 common/sfc_efx: not in enabled drivers build config 00:01:26.289 mempool/bucket: not in enabled drivers build config 00:01:26.289 mempool/cnxk: not in enabled drivers build config 00:01:26.289 mempool/dpaa: not in enabled drivers build config 00:01:26.289 mempool/dpaa2: not in enabled drivers build 
config 00:01:26.289 mempool/octeontx: not in enabled drivers build config 00:01:26.289 mempool/stack: not in enabled drivers build config 00:01:26.289 dma/cnxk: not in enabled drivers build config 00:01:26.289 dma/dpaa: not in enabled drivers build config 00:01:26.289 dma/dpaa2: not in enabled drivers build config 00:01:26.289 dma/hisilicon: not in enabled drivers build config 00:01:26.289 dma/idxd: not in enabled drivers build config 00:01:26.289 dma/ioat: not in enabled drivers build config 00:01:26.289 dma/skeleton: not in enabled drivers build config 00:01:26.289 net/af_packet: not in enabled drivers build config 00:01:26.289 net/af_xdp: not in enabled drivers build config 00:01:26.289 net/ark: not in enabled drivers build config 00:01:26.289 net/atlantic: not in enabled drivers build config 00:01:26.289 net/avp: not in enabled drivers build config 00:01:26.289 net/axgbe: not in enabled drivers build config 00:01:26.289 net/bnx2x: not in enabled drivers build config 00:01:26.289 net/bnxt: not in enabled drivers build config 00:01:26.289 net/bonding: not in enabled drivers build config 00:01:26.289 net/cnxk: not in enabled drivers build config 00:01:26.289 net/cpfl: not in enabled drivers build config 00:01:26.289 net/cxgbe: not in enabled drivers build config 00:01:26.289 net/dpaa: not in enabled drivers build config 00:01:26.289 net/dpaa2: not in enabled drivers build config 00:01:26.289 net/e1000: not in enabled drivers build config 00:01:26.289 net/ena: not in enabled drivers build config 00:01:26.289 net/enetc: not in enabled drivers build config 00:01:26.289 net/enetfec: not in enabled drivers build config 00:01:26.289 net/enic: not in enabled drivers build config 00:01:26.289 net/failsafe: not in enabled drivers build config 00:01:26.289 net/fm10k: not in enabled drivers build config 00:01:26.289 net/gve: not in enabled drivers build config 00:01:26.289 net/hinic: not in enabled drivers build config 00:01:26.289 net/hns3: not in enabled drivers build 
config 00:01:26.289 net/i40e: not in enabled drivers build config 00:01:26.289 net/iavf: not in enabled drivers build config 00:01:26.289 net/ice: not in enabled drivers build config 00:01:26.289 net/idpf: not in enabled drivers build config 00:01:26.289 net/igc: not in enabled drivers build config 00:01:26.289 net/ionic: not in enabled drivers build config 00:01:26.289 net/ipn3ke: not in enabled drivers build config 00:01:26.289 net/ixgbe: not in enabled drivers build config 00:01:26.289 net/mana: not in enabled drivers build config 00:01:26.289 net/memif: not in enabled drivers build config 00:01:26.289 net/mlx4: not in enabled drivers build config 00:01:26.289 net/mlx5: not in enabled drivers build config 00:01:26.289 net/mvneta: not in enabled drivers build config 00:01:26.289 net/mvpp2: not in enabled drivers build config 00:01:26.289 net/netvsc: not in enabled drivers build config 00:01:26.289 net/nfb: not in enabled drivers build config 00:01:26.289 net/nfp: not in enabled drivers build config 00:01:26.289 net/ngbe: not in enabled drivers build config 00:01:26.289 net/null: not in enabled drivers build config 00:01:26.289 net/octeontx: not in enabled drivers build config 00:01:26.289 net/octeon_ep: not in enabled drivers build config 00:01:26.289 net/pcap: not in enabled drivers build config 00:01:26.289 net/pfe: not in enabled drivers build config 00:01:26.289 net/qede: not in enabled drivers build config 00:01:26.289 net/ring: not in enabled drivers build config 00:01:26.289 net/sfc: not in enabled drivers build config 00:01:26.289 net/softnic: not in enabled drivers build config 00:01:26.289 net/tap: not in enabled drivers build config 00:01:26.289 net/thunderx: not in enabled drivers build config 00:01:26.289 net/txgbe: not in enabled drivers build config 00:01:26.289 net/vdev_netvsc: not in enabled drivers build config 00:01:26.289 net/vhost: not in enabled drivers build config 00:01:26.289 net/virtio: not in enabled drivers build config 00:01:26.289 
net/vmxnet3: not in enabled drivers build config 00:01:26.289 raw/*: missing internal dependency, "rawdev" 00:01:26.289 crypto/armv8: not in enabled drivers build config 00:01:26.289 crypto/bcmfs: not in enabled drivers build config 00:01:26.289 crypto/caam_jr: not in enabled drivers build config 00:01:26.289 crypto/ccp: not in enabled drivers build config 00:01:26.289 crypto/cnxk: not in enabled drivers build config 00:01:26.289 crypto/dpaa_sec: not in enabled drivers build config 00:01:26.289 crypto/dpaa2_sec: not in enabled drivers build config 00:01:26.289 crypto/ipsec_mb: not in enabled drivers build config 00:01:26.289 crypto/mlx5: not in enabled drivers build config 00:01:26.289 crypto/mvsam: not in enabled drivers build config 00:01:26.289 crypto/nitrox: not in enabled drivers build config 00:01:26.289 crypto/null: not in enabled drivers build config 00:01:26.290 crypto/octeontx: not in enabled drivers build config 00:01:26.290 crypto/openssl: not in enabled drivers build config 00:01:26.290 crypto/scheduler: not in enabled drivers build config 00:01:26.290 crypto/uadk: not in enabled drivers build config 00:01:26.290 crypto/virtio: not in enabled drivers build config 00:01:26.290 compress/isal: not in enabled drivers build config 00:01:26.290 compress/mlx5: not in enabled drivers build config 00:01:26.290 compress/octeontx: not in enabled drivers build config 00:01:26.290 compress/zlib: not in enabled drivers build config 00:01:26.290 regex/*: missing internal dependency, "regexdev" 00:01:26.290 ml/*: missing internal dependency, "mldev" 00:01:26.290 vdpa/ifc: not in enabled drivers build config 00:01:26.290 vdpa/mlx5: not in enabled drivers build config 00:01:26.290 vdpa/nfp: not in enabled drivers build config 00:01:26.290 vdpa/sfc: not in enabled drivers build config 00:01:26.290 event/*: missing internal dependency, "eventdev" 00:01:26.290 baseband/*: missing internal dependency, "bbdev" 00:01:26.290 gpu/*: missing internal dependency, "gpudev" 
00:01:26.290 00:01:26.290 00:01:26.290 Build targets in project: 84 00:01:26.290 00:01:26.290 DPDK 23.11.0 00:01:26.290 00:01:26.290 User defined options 00:01:26.290 buildtype : debug 00:01:26.290 default_library : shared 00:01:26.290 libdir : lib 00:01:26.290 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:26.290 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:26.290 c_link_args : 00:01:26.290 cpu_instruction_set: native 00:01:26.290 disable_apps : test-acl,test-bbdev,test-crypto-perf,test-fib,test-pipeline,test-gpudev,test-flow-perf,pdump,dumpcap,test-sad,test-cmdline,test-eventdev,proc-info,test,test-dma-perf,test-pmd,test-mldev,test-compress-perf,test-security-perf,graph,test-regex 00:01:26.290 disable_libs : pipeline,member,eventdev,efd,bbdev,cfgfile,rib,sched,mldev,metrics,lpm,latencystats,pdump,pdcp,bpf,ipsec,fib,ip_frag,table,port,stack,gro,jobstats,regexdev,rawdev,pcapng,dispatcher,node,bitratestats,acl,gpudev,distributor,graph,gso 00:01:26.290 enable_docs : false 00:01:26.290 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:26.290 enable_kmods : false 00:01:26.290 tests : false 00:01:26.290 00:01:26.290 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:26.290 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:26.290 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:26.290 [2/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:26.290 [3/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:26.290 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:26.290 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:26.290 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:26.290 [7/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:26.290 [8/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:26.290 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:26.290 [10/264] Linking static target lib/librte_kvargs.a 00:01:26.290 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:26.290 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:26.290 [13/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:26.290 [14/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:26.290 [15/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:26.549 [16/264] Linking static target lib/librte_log.a 00:01:26.549 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:26.549 [18/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:26.549 [19/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:26.549 [20/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:26.549 [21/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:26.549 [22/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:26.549 [23/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:26.549 [24/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:26.549 [25/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:26.549 [26/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:26.549 [27/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:26.549 [28/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:26.549 [29/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:26.549 [30/264] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:26.549 [31/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:26.549 [32/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:26.549 [33/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:26.549 [34/264] Linking static target lib/librte_pci.a 00:01:26.549 [35/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:26.549 [36/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:26.549 [37/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:26.549 [38/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:26.549 [39/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:26.549 [40/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:26.549 [41/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:26.549 [42/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:26.809 [43/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:26.809 [44/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:26.809 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:26.809 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:26.809 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:26.809 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:26.809 [49/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.809 [50/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.809 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:26.809 [52/264] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:26.809 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:26.809 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:26.809 [55/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:26.809 [56/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:26.809 [57/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:26.809 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:26.809 [59/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:26.809 [60/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:26.809 [61/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:26.809 [62/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:26.809 [63/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:26.809 [64/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:26.809 [65/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:26.809 [66/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:26.809 [67/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:26.809 [68/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:26.809 [69/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:26.809 [70/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:26.809 [71/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:26.809 [72/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:26.809 [73/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:26.809 [74/264] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:26.809 [75/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:26.809 [76/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:26.809 [77/264] Linking static target lib/librte_cmdline.a 00:01:27.070 [78/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:27.070 [79/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:27.070 [80/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:27.070 [81/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:27.070 [82/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:27.070 [83/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:27.070 [84/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:27.070 [85/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:27.070 [86/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:27.070 [87/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:27.070 [88/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:27.070 [89/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:27.070 [90/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:27.070 [91/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:27.070 [92/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:27.070 [93/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:27.070 [94/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:27.070 [95/264] Linking static target lib/librte_meter.a 00:01:27.070 [96/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:27.070 [97/264] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:27.070 [98/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:27.070 [99/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:27.070 [100/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:27.070 [101/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:27.070 [102/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:27.070 [103/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:27.070 [104/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:27.070 [105/264] Linking static target lib/librte_telemetry.a 00:01:27.070 [106/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:27.070 [107/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:27.070 [108/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:27.070 [109/264] Linking static target lib/librte_ring.a 00:01:27.070 [110/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:27.070 [111/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:27.070 [112/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:27.070 [113/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:27.070 [114/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:27.070 [115/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:27.070 [116/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:27.070 [117/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:27.070 [118/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:27.070 [119/264] Linking static target lib/librte_mempool.a 00:01:27.070 [120/264] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:27.070 [121/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:27.070 [122/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:27.070 [123/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:27.070 [124/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:27.070 [125/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:27.070 [126/264] Linking static target lib/librte_timer.a 00:01:27.070 [127/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:27.070 [128/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:27.070 [129/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.070 [130/264] Linking static target lib/librte_security.a 00:01:27.070 [131/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:27.070 [132/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:27.070 [133/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:27.070 [134/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:27.070 [135/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:27.070 [136/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:27.070 [137/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:27.070 [138/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:27.070 [139/264] Linking static target lib/librte_compressdev.a 00:01:27.070 [140/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:27.070 [141/264] Linking static target lib/librte_rcu.a 00:01:27.070 [142/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:27.070 [143/264] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:27.070 [144/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:27.070 [145/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:27.070 [146/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:27.070 [147/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:27.070 [148/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:27.070 [149/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:27.070 [150/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:27.070 [151/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:27.070 [152/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:27.070 [153/264] Linking static target lib/librte_power.a 00:01:27.070 [154/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:27.070 [155/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:27.070 [156/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:27.070 [157/264] Linking target lib/librte_log.so.24.0 00:01:27.070 [158/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:27.070 [159/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:27.070 [160/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:27.070 [161/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:27.070 [162/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:27.070 [163/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:27.070 [164/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:27.070 [165/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:27.070 [166/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 
00:01:27.070 [167/264] Linking static target lib/librte_dmadev.a 00:01:27.071 [168/264] Linking static target lib/librte_reorder.a 00:01:27.071 [169/264] Linking static target lib/librte_net.a 00:01:27.071 [170/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:27.071 [171/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:27.071 [172/264] Linking static target lib/librte_eal.a 00:01:27.071 [173/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:27.071 [174/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:27.331 [175/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:27.331 [176/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:27.331 [177/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:27.331 [178/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:27.331 [179/264] Linking static target lib/librte_mbuf.a 00:01:27.331 [180/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:27.331 [181/264] Linking static target lib/librte_hash.a 00:01:27.331 [182/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.331 [183/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:27.331 [184/264] Linking target lib/librte_kvargs.so.24.0 00:01:27.331 [185/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:27.331 [186/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:27.331 [187/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:27.331 [188/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:27.331 [189/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:27.331 [190/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:27.331 [191/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:27.331 [192/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:27.331 [193/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:27.331 [194/264] Linking static target drivers/librte_bus_pci.a
00:01:27.331 [195/264] Linking static target drivers/librte_bus_vdev.a
00:01:27.331 [196/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:01:27.591 [197/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:27.591 [198/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:27.591 [199/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:27.591 [200/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:27.591 [201/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:27.591 [202/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:27.591 [203/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:27.591 [204/264] Linking static target lib/librte_cryptodev.a
00:01:27.591 [205/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:27.591 [206/264] Linking static target drivers/librte_mempool_ring.a
00:01:27.592 [207/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:27.592 [208/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:27.592 [209/264] Linking target lib/librte_telemetry.so.24.0
00:01:27.592 [210/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:27.592 [211/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:01:27.852 [212/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:27.852 [213/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:01:27.852 [214/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:01:27.852 [215/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:27.852 [216/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:27.852 [217/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:28.113 [218/264] Linking static target lib/librte_ethdev.a
00:01:28.113 [219/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:28.113 [220/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:01:28.113 [221/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:28.113 [222/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:01:28.113 [223/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:29.058 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:01:29.058 [225/264] Linking static target lib/librte_vhost.a
00:01:29.679 [226/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:31.129 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:01:37.711 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:38.653 [229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:01:38.653 [230/264] Linking target lib/librte_eal.so.24.0
00:01:38.914 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:01:38.914 [232/264] Linking target lib/librte_ring.so.24.0
00:01:38.914 [233/264] Linking target lib/librte_meter.so.24.0
00:01:38.914 [234/264] Linking target lib/librte_timer.so.24.0
00:01:38.914 [235/264] Linking target lib/librte_pci.so.24.0
00:01:38.914 [236/264] Linking target lib/librte_dmadev.so.24.0
00:01:38.914 [237/264] Linking target drivers/librte_bus_vdev.so.24.0
00:01:39.175 [238/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:01:39.175 [239/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:01:39.175 [240/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:01:39.175 [241/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:01:39.175 [242/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:01:39.175 [243/264] Linking target lib/librte_rcu.so.24.0
00:01:39.175 [244/264] Linking target lib/librte_mempool.so.24.0
00:01:39.175 [245/264] Linking target drivers/librte_bus_pci.so.24.0
00:01:39.175 [246/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:01:39.175 [247/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:01:39.436 [248/264] Linking target drivers/librte_mempool_ring.so.24.0
00:01:39.436 [249/264] Linking target lib/librte_mbuf.so.24.0
00:01:39.436 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:01:39.436 [251/264] Linking target lib/librte_compressdev.so.24.0
00:01:39.436 [252/264] Linking target lib/librte_reorder.so.24.0
00:01:39.436 [253/264] Linking target lib/librte_net.so.24.0
00:01:39.436 [254/264] Linking target lib/librte_cryptodev.so.24.0
00:01:39.697 [255/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:01:39.697 [256/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:01:39.697 [257/264] Linking target lib/librte_hash.so.24.0
00:01:39.697 [258/264] Linking target lib/librte_security.so.24.0
00:01:39.697 [259/264] Linking target lib/librte_cmdline.so.24.0
00:01:39.697 [260/264] Linking target lib/librte_ethdev.so.24.0
00:01:39.958 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:01:39.958 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:01:39.958 [263/264] Linking target lib/librte_power.so.24.0
00:01:39.958 [264/264] Linking target lib/librte_vhost.so.24.0
00:01:39.958 INFO: autodetecting backend as ninja
00:01:39.958 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144
00:01:41.346 CC lib/log/log.o
00:01:41.346 CC lib/log/log_flags.o
00:01:41.346 CC lib/log/log_deprecated.o
00:01:41.346 CC lib/ut/ut.o
00:01:41.346 CC lib/ut_mock/mock.o
00:01:41.346 LIB libspdk_ut_mock.a
00:01:41.346 LIB libspdk_log.a
00:01:41.346 LIB libspdk_ut.a
00:01:41.346 SO libspdk_ut_mock.so.6.0
00:01:41.346 SO libspdk_log.so.7.0
00:01:41.346 SO libspdk_ut.so.2.0
00:01:41.346 SYMLINK libspdk_ut_mock.so
00:01:41.346 SYMLINK libspdk_ut.so
00:01:41.346 SYMLINK libspdk_log.so
00:01:41.920 CC lib/ioat/ioat.o
00:01:41.920 CC lib/dma/dma.o
00:01:41.920 CC lib/util/base64.o
00:01:41.920 CXX lib/trace_parser/trace.o
00:01:41.920 CC lib/util/bit_array.o
00:01:41.920 CC lib/util/cpuset.o
00:01:41.920 CC lib/util/crc16.o
00:01:41.920 CC lib/util/crc32.o
00:01:41.920 CC lib/util/crc32c.o
00:01:41.920 CC lib/util/crc32_ieee.o
00:01:41.920 CC lib/util/crc64.o
00:01:41.920 CC lib/util/dif.o
00:01:41.920 CC lib/util/fd.o
00:01:41.920 CC lib/util/file.o
00:01:41.920 CC lib/util/hexlify.o
00:01:41.920 CC lib/util/iov.o
00:01:41.920 CC lib/util/math.o
00:01:41.920 CC lib/util/pipe.o
00:01:41.920 CC lib/util/strerror_tls.o
00:01:41.920 CC lib/util/string.o
00:01:41.920 CC lib/util/uuid.o
00:01:41.920 CC lib/util/fd_group.o
00:01:41.920 CC lib/util/xor.o
00:01:41.920 CC lib/util/zipf.o
00:01:41.920 CC lib/vfio_user/host/vfio_user_pci.o
00:01:41.920 CC lib/vfio_user/host/vfio_user.o
00:01:41.920 LIB libspdk_dma.a
00:01:41.920 LIB libspdk_ioat.a
00:01:41.920 SO libspdk_dma.so.4.0
00:01:41.920 SO libspdk_ioat.so.7.0
00:01:41.920 SYMLINK libspdk_dma.so
00:01:42.181 SYMLINK libspdk_ioat.so
00:01:42.181 LIB libspdk_vfio_user.a
00:01:42.181 SO libspdk_vfio_user.so.5.0
00:01:42.181 LIB libspdk_util.a
00:01:42.181 SYMLINK libspdk_vfio_user.so
00:01:42.443 SO libspdk_util.so.9.0
00:01:42.443 SYMLINK libspdk_util.so
00:01:42.443 LIB libspdk_trace_parser.a
00:01:42.443 SO libspdk_trace_parser.so.5.0
00:01:42.704 SYMLINK libspdk_trace_parser.so
00:01:42.704 CC lib/rdma/common.o
00:01:42.704 CC lib/rdma/rdma_verbs.o
00:01:42.704 CC lib/json/json_parse.o
00:01:42.704 CC lib/json/json_util.o
00:01:42.704 CC lib/json/json_write.o
00:01:42.704 CC lib/env_dpdk/pci.o
00:01:42.704 CC lib/env_dpdk/env.o
00:01:42.704 CC lib/env_dpdk/memory.o
00:01:42.704 CC lib/env_dpdk/init.o
00:01:42.704 CC lib/conf/conf.o
00:01:42.704 CC lib/env_dpdk/threads.o
00:01:42.704 CC lib/env_dpdk/pci_ioat.o
00:01:42.704 CC lib/env_dpdk/pci_virtio.o
00:01:42.704 CC lib/idxd/idxd.o
00:01:42.704 CC lib/env_dpdk/pci_vmd.o
00:01:42.704 CC lib/idxd/idxd_user.o
00:01:42.704 CC lib/env_dpdk/pci_idxd.o
00:01:42.704 CC lib/env_dpdk/pci_event.o
00:01:42.704 CC lib/vmd/vmd.o
00:01:42.704 CC lib/env_dpdk/sigbus_handler.o
00:01:42.704 CC lib/vmd/led.o
00:01:42.704 CC lib/env_dpdk/pci_dpdk.o
00:01:42.704 CC lib/env_dpdk/pci_dpdk_2207.o
00:01:42.704 CC lib/env_dpdk/pci_dpdk_2211.o
00:01:42.964 LIB libspdk_conf.a
00:01:42.964 LIB libspdk_rdma.a
00:01:42.964 LIB libspdk_json.a
00:01:42.964 SO libspdk_conf.so.6.0
00:01:42.964 SO libspdk_rdma.so.6.0
00:01:42.964 SO libspdk_json.so.6.0
00:01:43.225 SYMLINK libspdk_conf.so
00:01:43.225 SYMLINK libspdk_rdma.so
00:01:43.225 SYMLINK libspdk_json.so
00:01:43.225 LIB libspdk_idxd.a
00:01:43.225 SO libspdk_idxd.so.12.0
00:01:43.488 LIB libspdk_vmd.a
00:01:43.488 SYMLINK libspdk_idxd.so
00:01:43.488 SO libspdk_vmd.so.6.0
00:01:43.488 SYMLINK libspdk_vmd.so
00:01:43.488 CC lib/jsonrpc/jsonrpc_server.o
00:01:43.488 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:01:43.488 CC lib/jsonrpc/jsonrpc_client.o
00:01:43.488 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:01:43.749 LIB libspdk_jsonrpc.a
00:01:43.749 SO libspdk_jsonrpc.so.6.0
00:01:43.750 SYMLINK libspdk_jsonrpc.so
00:01:44.011 LIB libspdk_env_dpdk.a
00:01:44.011 SO libspdk_env_dpdk.so.14.0
00:01:44.272 CC lib/rpc/rpc.o
00:01:44.272 SYMLINK libspdk_env_dpdk.so
00:01:44.534 LIB libspdk_rpc.a
00:01:44.534 SO libspdk_rpc.so.6.0
00:01:44.534 SYMLINK libspdk_rpc.so
00:01:44.795 CC lib/trace/trace.o
00:01:44.795 CC lib/trace/trace_flags.o
00:01:44.795 CC lib/trace/trace_rpc.o
00:01:44.795 CC lib/keyring/keyring.o
00:01:44.795 CC lib/keyring/keyring_rpc.o
00:01:44.795 CC lib/notify/notify.o
00:01:44.795 CC lib/notify/notify_rpc.o
00:01:45.056 LIB libspdk_trace.a
00:01:45.056 LIB libspdk_notify.a
00:01:45.056 SO libspdk_trace.so.10.0
00:01:45.056 SO libspdk_notify.so.6.0
00:01:45.056 LIB libspdk_keyring.a
00:01:45.056 SO libspdk_keyring.so.1.0
00:01:45.056 SYMLINK libspdk_notify.so
00:01:45.056 SYMLINK libspdk_trace.so
00:01:45.319 SYMLINK libspdk_keyring.so
00:01:45.580 CC lib/thread/thread.o
00:01:45.580 CC lib/thread/iobuf.o
00:01:45.580 CC lib/sock/sock.o
00:01:45.580 CC lib/sock/sock_rpc.o
00:01:45.842 LIB libspdk_sock.a
00:01:45.842 SO libspdk_sock.so.9.0
00:01:46.104 SYMLINK libspdk_sock.so
00:01:46.364 CC lib/nvme/nvme_ctrlr_cmd.o
00:01:46.364 CC lib/nvme/nvme_ctrlr.o
00:01:46.364 CC lib/nvme/nvme_fabric.o
00:01:46.364 CC lib/nvme/nvme_ns_cmd.o
00:01:46.364 CC lib/nvme/nvme_ns.o
00:01:46.364 CC lib/nvme/nvme_pcie_common.o
00:01:46.364 CC lib/nvme/nvme_pcie.o
00:01:46.364 CC lib/nvme/nvme_qpair.o
00:01:46.364 CC lib/nvme/nvme.o
00:01:46.364 CC lib/nvme/nvme_quirks.o
00:01:46.364 CC lib/nvme/nvme_transport.o
00:01:46.364 CC lib/nvme/nvme_discovery.o
00:01:46.364 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:01:46.364 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:01:46.364 CC lib/nvme/nvme_tcp.o
00:01:46.364 CC lib/nvme/nvme_opal.o
00:01:46.364 CC lib/nvme/nvme_io_msg.o
00:01:46.364 CC lib/nvme/nvme_poll_group.o
00:01:46.364 CC lib/nvme/nvme_zns.o
00:01:46.364 CC lib/nvme/nvme_stubs.o
00:01:46.364 CC lib/nvme/nvme_auth.o
00:01:46.364 CC lib/nvme/nvme_cuse.o
00:01:46.364 CC lib/nvme/nvme_vfio_user.o
00:01:46.364 CC lib/nvme/nvme_rdma.o
00:01:46.934 LIB libspdk_thread.a
00:01:46.934 SO libspdk_thread.so.10.0
00:01:46.934 SYMLINK libspdk_thread.so
00:01:47.192 CC lib/init/json_config.o
00:01:47.192 CC lib/virtio/virtio.o
00:01:47.192 CC lib/blob/blobstore.o
00:01:47.192 CC lib/virtio/virtio_vhost_user.o
00:01:47.192 CC lib/init/subsystem.o
00:01:47.192 CC lib/virtio/virtio_vfio_user.o
00:01:47.192 CC lib/blob/request.o
00:01:47.192 CC lib/init/subsystem_rpc.o
00:01:47.192 CC lib/virtio/virtio_pci.o
00:01:47.192 CC lib/blob/zeroes.o
00:01:47.192 CC lib/init/rpc.o
00:01:47.192 CC lib/accel/accel.o
00:01:47.192 CC lib/blob/blob_bs_dev.o
00:01:47.192 CC lib/accel/accel_rpc.o
00:01:47.192 CC lib/accel/accel_sw.o
00:01:47.192 CC lib/vfu_tgt/tgt_endpoint.o
00:01:47.192 CC lib/vfu_tgt/tgt_rpc.o
00:01:47.453 LIB libspdk_init.a
00:01:47.453 SO libspdk_init.so.5.0
00:01:47.453 LIB libspdk_virtio.a
00:01:47.453 LIB libspdk_vfu_tgt.a
00:01:47.453 SYMLINK libspdk_init.so
00:01:47.453 SO libspdk_virtio.so.7.0
00:01:47.713 SO libspdk_vfu_tgt.so.3.0
00:01:47.713 SYMLINK libspdk_vfu_tgt.so
00:01:47.713 SYMLINK libspdk_virtio.so
00:01:47.975 CC lib/event/app.o
00:01:47.975 CC lib/event/reactor.o
00:01:47.975 CC lib/event/log_rpc.o
00:01:47.975 CC lib/event/app_rpc.o
00:01:47.975 CC lib/event/scheduler_static.o
00:01:47.975 LIB libspdk_accel.a
00:01:48.236 SO libspdk_accel.so.15.0
00:01:48.236 LIB libspdk_nvme.a
00:01:48.236 SYMLINK libspdk_accel.so
00:01:48.236 SO libspdk_nvme.so.13.0
00:01:48.236 LIB libspdk_event.a
00:01:48.236 SO libspdk_event.so.13.0
00:01:48.497 SYMLINK libspdk_event.so
00:01:48.497 SYMLINK libspdk_nvme.so
00:01:48.497 CC lib/bdev/bdev_zone.o
00:01:48.497 CC lib/bdev/bdev.o
00:01:48.497 CC lib/bdev/bdev_rpc.o
00:01:48.497 CC lib/bdev/part.o
00:01:48.497 CC lib/bdev/scsi_nvme.o
00:01:49.441 LIB libspdk_blob.a
00:01:49.703 SO libspdk_blob.so.11.0
00:01:49.703 SYMLINK libspdk_blob.so
00:01:49.964 CC lib/lvol/lvol.o
00:01:49.964 CC lib/blobfs/blobfs.o
00:01:49.964 CC lib/blobfs/tree.o
00:01:50.907 LIB libspdk_bdev.a
00:01:50.907 SO libspdk_bdev.so.15.0
00:01:50.907 LIB libspdk_blobfs.a
00:01:50.907 LIB libspdk_lvol.a
00:01:50.907 SO libspdk_blobfs.so.10.0
00:01:50.907 SO libspdk_lvol.so.10.0
00:01:50.907 SYMLINK libspdk_bdev.so
00:01:50.907 SYMLINK libspdk_blobfs.so
00:01:50.907 SYMLINK libspdk_lvol.so
00:01:51.168 CC lib/ublk/ublk.o
00:01:51.168 CC lib/ublk/ublk_rpc.o
00:01:51.168 CC lib/ftl/ftl_core.o
00:01:51.168 CC lib/ftl/ftl_init.o
00:01:51.168 CC lib/scsi/dev.o
00:01:51.168 CC lib/ftl/ftl_layout.o
00:01:51.168 CC lib/scsi/lun.o
00:01:51.168 CC lib/ftl/ftl_debug.o
00:01:51.168 CC lib/scsi/port.o
00:01:51.168 CC lib/ftl/ftl_io.o
00:01:51.168 CC lib/scsi/scsi.o
00:01:51.168 CC lib/ftl/ftl_l2p_flat.o
00:01:51.168 CC lib/ftl/ftl_sb.o
00:01:51.168 CC lib/scsi/scsi_bdev.o
00:01:51.168 CC lib/scsi/scsi_pr.o
00:01:51.168 CC lib/ftl/ftl_l2p.o
00:01:51.168 CC lib/ftl/ftl_nv_cache.o
00:01:51.168 CC lib/nbd/nbd.o
00:01:51.168 CC lib/nbd/nbd_rpc.o
00:01:51.168 CC lib/scsi/scsi_rpc.o
00:01:51.168 CC lib/nvmf/ctrlr.o
00:01:51.168 CC lib/ftl/ftl_band.o
00:01:51.168 CC lib/scsi/task.o
00:01:51.168 CC lib/nvmf/ctrlr_discovery.o
00:01:51.168 CC lib/ftl/ftl_band_ops.o
00:01:51.168 CC lib/nvmf/ctrlr_bdev.o
00:01:51.168 CC lib/ftl/ftl_writer.o
00:01:51.168 CC lib/ftl/ftl_rq.o
00:01:51.168 CC lib/nvmf/subsystem.o
00:01:51.168 CC lib/nvmf/nvmf.o
00:01:51.168 CC lib/ftl/ftl_reloc.o
00:01:51.168 CC lib/ftl/ftl_l2p_cache.o
00:01:51.168 CC lib/nvmf/nvmf_rpc.o
00:01:51.168 CC lib/ftl/ftl_p2l.o
00:01:51.168 CC lib/nvmf/transport.o
00:01:51.168 CC lib/ftl/mngt/ftl_mngt.o
00:01:51.168 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:01:51.168 CC lib/nvmf/tcp.o
00:01:51.168 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:01:51.168 CC lib/nvmf/vfio_user.o
00:01:51.168 CC lib/nvmf/rdma.o
00:01:51.168 CC lib/ftl/mngt/ftl_mngt_startup.o
00:01:51.168 CC lib/ftl/mngt/ftl_mngt_md.o
00:01:51.168 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:01:51.169 CC lib/ftl/mngt/ftl_mngt_misc.o
00:01:51.169 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:01:51.169 CC lib/ftl/mngt/ftl_mngt_band.o
00:01:51.169 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:01:51.169 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:01:51.169 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:01:51.169 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:01:51.169 CC lib/ftl/utils/ftl_conf.o
00:01:51.169 CC lib/ftl/utils/ftl_md.o
00:01:51.169 CC lib/ftl/utils/ftl_bitmap.o
00:01:51.169 CC lib/ftl/utils/ftl_mempool.o
00:01:51.169 CC lib/ftl/utils/ftl_property.o
00:01:51.169 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:01:51.169 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:01:51.169 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:01:51.169 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:01:51.169 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:01:51.169 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:01:51.169 CC lib/ftl/upgrade/ftl_sb_v3.o
00:01:51.169 CC lib/ftl/upgrade/ftl_sb_v5.o
00:01:51.169 CC lib/ftl/nvc/ftl_nvc_dev.o
00:01:51.169 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:01:51.428 CC lib/ftl/base/ftl_base_dev.o
00:01:51.428 CC lib/ftl/ftl_trace.o
00:01:51.428 CC lib/ftl/base/ftl_base_bdev.o
00:01:51.687 LIB libspdk_nbd.a
00:01:51.687 SO libspdk_nbd.so.7.0
00:01:51.687 LIB libspdk_scsi.a
00:01:51.948 SYMLINK libspdk_nbd.so
00:01:51.948 SO libspdk_scsi.so.9.0
00:01:51.948 LIB libspdk_ublk.a
00:01:51.948 SO libspdk_ublk.so.3.0
00:01:51.948 SYMLINK libspdk_scsi.so
00:01:51.948 SYMLINK libspdk_ublk.so
00:01:52.210 LIB libspdk_ftl.a
00:01:52.210 CC lib/iscsi/conn.o
00:01:52.210 CC lib/iscsi/init_grp.o
00:01:52.210 CC lib/iscsi/iscsi.o
00:01:52.210 CC lib/iscsi/md5.o
00:01:52.210 CC lib/iscsi/param.o
00:01:52.210 CC lib/iscsi/portal_grp.o
00:01:52.210 CC lib/iscsi/tgt_node.o
00:01:52.210 CC lib/iscsi/iscsi_subsystem.o
00:01:52.210 CC lib/iscsi/iscsi_rpc.o
00:01:52.210 CC lib/iscsi/task.o
00:01:52.210 CC lib/vhost/vhost.o
00:01:52.210 CC lib/vhost/vhost_rpc.o
00:01:52.210 CC lib/vhost/vhost_scsi.o
00:01:52.210 SO libspdk_ftl.so.9.0
00:01:52.210 CC lib/vhost/vhost_blk.o
00:01:52.210 CC lib/vhost/rte_vhost_user.o
00:01:52.782 SYMLINK libspdk_ftl.so
00:01:53.050 LIB libspdk_nvmf.a
00:01:53.050 SO libspdk_nvmf.so.18.0
00:01:53.317 LIB libspdk_vhost.a
00:01:53.317 SO libspdk_vhost.so.8.0
00:01:53.317 SYMLINK libspdk_nvmf.so
00:01:53.317 SYMLINK libspdk_vhost.so
00:01:53.578 LIB libspdk_iscsi.a
00:01:53.578 SO libspdk_iscsi.so.8.0
00:01:53.578 SYMLINK libspdk_iscsi.so
00:01:54.149 CC module/env_dpdk/env_dpdk_rpc.o
00:01:54.149 CC module/vfu_device/vfu_virtio.o
00:01:54.149 CC module/vfu_device/vfu_virtio_blk.o
00:01:54.149 CC module/vfu_device/vfu_virtio_scsi.o
00:01:54.149 CC module/vfu_device/vfu_virtio_rpc.o
00:01:54.410 CC module/accel/ioat/accel_ioat_rpc.o
00:01:54.410 CC module/accel/ioat/accel_ioat.o
00:01:54.410 CC module/accel/error/accel_error.o
00:01:54.410 CC module/accel/error/accel_error_rpc.o
00:01:54.410 CC module/blob/bdev/blob_bdev.o
00:01:54.410 LIB libspdk_env_dpdk_rpc.a
00:01:54.410 CC module/sock/posix/posix.o
00:01:54.410 CC module/accel/dsa/accel_dsa.o
00:01:54.410 CC module/accel/dsa/accel_dsa_rpc.o
00:01:54.410 CC module/scheduler/dynamic/scheduler_dynamic.o
00:01:54.410 CC module/accel/iaa/accel_iaa.o
00:01:54.410 CC module/accel/iaa/accel_iaa_rpc.o
00:01:54.410 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:01:54.410 CC module/keyring/file/keyring.o
00:01:54.410 CC module/keyring/file/keyring_rpc.o
00:01:54.410 CC module/scheduler/gscheduler/gscheduler.o
00:01:54.410 SO libspdk_env_dpdk_rpc.so.6.0
00:01:54.410 SYMLINK libspdk_env_dpdk_rpc.so
00:01:54.672 LIB libspdk_scheduler_gscheduler.a
00:01:54.672 LIB libspdk_accel_error.a
00:01:54.672 LIB libspdk_keyring_file.a
00:01:54.672 LIB libspdk_scheduler_dpdk_governor.a
00:01:54.672 LIB libspdk_scheduler_dynamic.a
00:01:54.672 SO libspdk_scheduler_gscheduler.so.4.0
00:01:54.672 LIB libspdk_accel_ioat.a
00:01:54.672 SO libspdk_keyring_file.so.1.0
00:01:54.672 SO libspdk_accel_error.so.2.0
00:01:54.672 LIB libspdk_accel_iaa.a
00:01:54.672 SO libspdk_scheduler_dynamic.so.4.0
00:01:54.672 LIB libspdk_accel_dsa.a
00:01:54.672 SO libspdk_scheduler_dpdk_governor.so.4.0
00:01:54.672 SYMLINK libspdk_scheduler_gscheduler.so
00:01:54.672 SO libspdk_accel_ioat.so.6.0
00:01:54.672 SO libspdk_accel_dsa.so.5.0
00:01:54.672 LIB libspdk_blob_bdev.a
00:01:54.672 SO libspdk_accel_iaa.so.3.0
00:01:54.672 SYMLINK libspdk_keyring_file.so
00:01:54.672 SYMLINK libspdk_scheduler_dpdk_governor.so
00:01:54.672 SYMLINK libspdk_accel_error.so
00:01:54.672 SYMLINK libspdk_scheduler_dynamic.so
00:01:54.672 SO libspdk_blob_bdev.so.11.0
00:01:54.672 SYMLINK libspdk_accel_ioat.so
00:01:54.672 SYMLINK libspdk_accel_dsa.so
00:01:54.672 SYMLINK libspdk_accel_iaa.so
00:01:54.672 SYMLINK libspdk_blob_bdev.so
00:01:54.672 LIB libspdk_vfu_device.a
00:01:54.933 SO libspdk_vfu_device.so.3.0
00:01:54.933 SYMLINK libspdk_vfu_device.so
00:01:55.195 LIB libspdk_sock_posix.a
00:01:55.195 SO libspdk_sock_posix.so.6.0
00:01:55.195 SYMLINK libspdk_sock_posix.so
00:01:55.457 CC module/bdev/error/vbdev_error.o
00:01:55.457 CC module/bdev/error/vbdev_error_rpc.o
00:01:55.457 CC module/bdev/gpt/gpt.o
00:01:55.457 CC module/bdev/iscsi/bdev_iscsi.o
00:01:55.457 CC module/bdev/gpt/vbdev_gpt.o
00:01:55.457 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:01:55.457 CC module/bdev/lvol/vbdev_lvol.o
00:01:55.457 CC module/bdev/nvme/bdev_nvme.o
00:01:55.457 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:01:55.457 CC module/bdev/nvme/bdev_nvme_rpc.o
00:01:55.457 CC module/bdev/null/bdev_null.o
00:01:55.457 CC module/bdev/nvme/nvme_rpc.o
00:01:55.457 CC module/bdev/nvme/vbdev_opal.o
00:01:55.457 CC module/bdev/nvme/bdev_mdns_client.o
00:01:55.457 CC module/bdev/passthru/vbdev_passthru.o
00:01:55.457 CC module/bdev/null/bdev_null_rpc.o
00:01:55.457 CC module/bdev/nvme/vbdev_opal_rpc.o
00:01:55.457 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:01:55.457 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:01:55.457 CC module/bdev/zone_block/vbdev_zone_block.o
00:01:55.457 CC module/blobfs/bdev/blobfs_bdev.o
00:01:55.457 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:01:55.457 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:01:55.457 CC module/bdev/aio/bdev_aio.o
00:01:55.457 CC module/bdev/split/vbdev_split.o
00:01:55.457 CC module/bdev/aio/bdev_aio_rpc.o
00:01:55.457 CC module/bdev/split/vbdev_split_rpc.o
00:01:55.457 CC module/bdev/delay/vbdev_delay.o
00:01:55.457 CC module/bdev/malloc/bdev_malloc.o
00:01:55.457 CC module/bdev/delay/vbdev_delay_rpc.o
00:01:55.457 CC module/bdev/malloc/bdev_malloc_rpc.o
00:01:55.457 CC module/bdev/virtio/bdev_virtio_blk.o
00:01:55.457 CC module/bdev/virtio/bdev_virtio_scsi.o
00:01:55.457 CC module/bdev/raid/bdev_raid.o
00:01:55.457 CC module/bdev/virtio/bdev_virtio_rpc.o
00:01:55.457 CC module/bdev/raid/bdev_raid_rpc.o
00:01:55.457 CC module/bdev/ftl/bdev_ftl.o
00:01:55.457 CC module/bdev/raid/bdev_raid_sb.o
00:01:55.457 CC module/bdev/raid/raid0.o
00:01:55.457 CC module/bdev/ftl/bdev_ftl_rpc.o
00:01:55.457 CC module/bdev/raid/raid1.o
00:01:55.457 CC module/bdev/raid/concat.o
00:01:55.718 LIB libspdk_blobfs_bdev.a
00:01:55.718 LIB libspdk_bdev_split.a
00:01:55.718 SO libspdk_blobfs_bdev.so.6.0
00:01:55.718 LIB libspdk_bdev_error.a
00:01:55.718 SO libspdk_bdev_split.so.6.0
00:01:55.718 LIB libspdk_bdev_null.a
00:01:55.718 LIB libspdk_bdev_passthru.a
00:01:55.718 SO libspdk_bdev_error.so.6.0
00:01:55.718 SO libspdk_bdev_null.so.6.0
00:01:55.719 LIB libspdk_bdev_gpt.a
00:01:55.719 LIB libspdk_bdev_ftl.a
00:01:55.719 SYMLINK libspdk_blobfs_bdev.so
00:01:55.719 SO libspdk_bdev_passthru.so.6.0
00:01:55.719 LIB libspdk_bdev_aio.a
00:01:55.719 SO libspdk_bdev_gpt.so.6.0
00:01:55.719 SYMLINK libspdk_bdev_split.so
00:01:55.719 LIB libspdk_bdev_iscsi.a
00:01:55.719 LIB libspdk_bdev_zone_block.a
00:01:55.719 SO libspdk_bdev_ftl.so.6.0
00:01:55.719 SYMLINK libspdk_bdev_null.so
00:01:55.719 SYMLINK libspdk_bdev_error.so
00:01:55.719 LIB libspdk_bdev_malloc.a
00:01:55.719 SYMLINK libspdk_bdev_passthru.so
00:01:55.719 SO libspdk_bdev_iscsi.so.6.0
00:01:55.719 SO libspdk_bdev_aio.so.6.0
00:01:55.719 LIB libspdk_bdev_delay.a
00:01:55.719 SO libspdk_bdev_zone_block.so.6.0
00:01:55.719 SYMLINK libspdk_bdev_gpt.so
00:01:55.719 SO libspdk_bdev_malloc.so.6.0
00:01:55.719 SO libspdk_bdev_delay.so.6.0
00:01:55.719 SYMLINK libspdk_bdev_ftl.so
00:01:55.719 LIB libspdk_bdev_lvol.a
00:01:55.719 SYMLINK libspdk_bdev_aio.so
00:01:55.719 SYMLINK libspdk_bdev_iscsi.so
00:01:55.719 SYMLINK libspdk_bdev_zone_block.so
00:01:55.979 SYMLINK libspdk_bdev_malloc.so
00:01:55.979 SO libspdk_bdev_lvol.so.6.0
00:01:55.979 SYMLINK libspdk_bdev_delay.so
00:01:55.979 LIB libspdk_bdev_virtio.a
00:01:55.979 SO libspdk_bdev_virtio.so.6.0
00:01:55.979 SYMLINK libspdk_bdev_lvol.so
00:01:55.979 SYMLINK libspdk_bdev_virtio.so
00:01:56.240 LIB libspdk_bdev_raid.a
00:01:56.240 SO libspdk_bdev_raid.so.6.0
00:01:56.240 SYMLINK libspdk_bdev_raid.so
00:01:57.277 LIB libspdk_bdev_nvme.a
00:01:57.277 SO libspdk_bdev_nvme.so.7.0
00:01:57.277 SYMLINK libspdk_bdev_nvme.so
00:01:58.221 CC module/event/subsystems/scheduler/scheduler.o
00:01:58.221 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:01:58.221 CC module/event/subsystems/sock/sock.o
00:01:58.221 CC module/event/subsystems/vmd/vmd.o
00:01:58.221 CC module/event/subsystems/vmd/vmd_rpc.o
00:01:58.221 CC module/event/subsystems/iobuf/iobuf.o
00:01:58.221 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:01:58.221 CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:01:58.221 CC module/event/subsystems/keyring/keyring.o
00:01:58.221 LIB libspdk_event_sock.a
00:01:58.221 LIB libspdk_event_scheduler.a
00:01:58.221 LIB libspdk_event_vhost_blk.a
00:01:58.221 LIB libspdk_event_vfu_tgt.a
00:01:58.221 LIB libspdk_event_vmd.a
00:01:58.221 LIB libspdk_event_keyring.a
00:01:58.221 SO libspdk_event_sock.so.5.0
00:01:58.221 SO libspdk_event_scheduler.so.4.0
00:01:58.221 LIB libspdk_event_iobuf.a
00:01:58.221 SO libspdk_event_vhost_blk.so.3.0
00:01:58.221 SO libspdk_event_vfu_tgt.so.3.0
00:01:58.221 SO libspdk_event_vmd.so.6.0
00:01:58.221 SO libspdk_event_keyring.so.1.0
00:01:58.221 SO libspdk_event_iobuf.so.3.0
00:01:58.221 SYMLINK libspdk_event_scheduler.so
00:01:58.221 SYMLINK libspdk_event_sock.so
00:01:58.481 SYMLINK libspdk_event_vfu_tgt.so
00:01:58.481 SYMLINK libspdk_event_vhost_blk.so
00:01:58.481 SYMLINK libspdk_event_keyring.so
00:01:58.481 SYMLINK libspdk_event_vmd.so
00:01:58.481 SYMLINK libspdk_event_iobuf.so
00:01:58.742 CC module/event/subsystems/accel/accel.o
00:01:59.003 LIB libspdk_event_accel.a
00:01:59.003 SO libspdk_event_accel.so.6.0
00:01:59.003 SYMLINK libspdk_event_accel.so
00:01:59.263 CC module/event/subsystems/bdev/bdev.o
00:01:59.524 LIB libspdk_event_bdev.a
00:01:59.524 SO libspdk_event_bdev.so.6.0
00:01:59.785 SYMLINK libspdk_event_bdev.so
00:02:00.046 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:02:00.046 CC module/event/subsystems/scsi/scsi.o
00:02:00.046 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:02:00.046 CC module/event/subsystems/ublk/ublk.o
00:02:00.046 CC module/event/subsystems/nbd/nbd.o
00:02:00.307 LIB libspdk_event_ublk.a
00:02:00.307 LIB libspdk_event_nbd.a
00:02:00.307 LIB libspdk_event_scsi.a
00:02:00.307 SO libspdk_event_ublk.so.3.0
00:02:00.307 SO libspdk_event_nbd.so.6.0
00:02:00.307 SO libspdk_event_scsi.so.6.0
00:02:00.307 LIB libspdk_event_nvmf.a
00:02:00.307 SYMLINK libspdk_event_ublk.so
00:02:00.307 SYMLINK libspdk_event_nbd.so
00:02:00.307 SO libspdk_event_nvmf.so.6.0
00:02:00.307 SYMLINK libspdk_event_scsi.so
00:02:00.307 SYMLINK libspdk_event_nvmf.so
00:02:00.568 CC module/event/subsystems/iscsi/iscsi.o
00:02:00.568 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:02:00.829 LIB libspdk_event_vhost_scsi.a
00:02:00.829 LIB libspdk_event_iscsi.a
00:02:00.829 SO libspdk_event_vhost_scsi.so.3.0
00:02:00.829 SO libspdk_event_iscsi.so.6.0
00:02:00.829 SYMLINK libspdk_event_vhost_scsi.so
00:02:00.829 SYMLINK libspdk_event_iscsi.so
00:02:01.090 SO libspdk.so.6.0
00:02:01.090 SYMLINK libspdk.so
00:02:01.671 CC app/spdk_top/spdk_top.o
00:02:01.671 CXX app/trace/trace.o
00:02:01.671 CC app/spdk_nvme_discover/discovery_aer.o
00:02:01.671 CC app/spdk_lspci/spdk_lspci.o
00:02:01.671 CC app/trace_record/trace_record.o
00:02:01.671 CC app/spdk_nvme_identify/identify.o
00:02:01.671 TEST_HEADER include/spdk/accel.h
00:02:01.671 TEST_HEADER include/spdk/assert.h
00:02:01.671 CC app/spdk_nvme_perf/perf.o
00:02:01.671 TEST_HEADER include/spdk/barrier.h
00:02:01.671 TEST_HEADER include/spdk/accel_module.h
00:02:01.671 TEST_HEADER include/spdk/base64.h
00:02:01.671 TEST_HEADER include/spdk/bdev.h
00:02:01.671 TEST_HEADER include/spdk/bdev_module.h
00:02:01.671 TEST_HEADER include/spdk/bdev_zone.h
00:02:01.671 TEST_HEADER include/spdk/bit_array.h
00:02:01.671 TEST_HEADER include/spdk/blob_bdev.h
00:02:01.671 TEST_HEADER include/spdk/bit_pool.h
00:02:01.671 TEST_HEADER include/spdk/blobfs_bdev.h
00:02:01.671 TEST_HEADER include/spdk/blob.h
00:02:01.671 TEST_HEADER include/spdk/blobfs.h
00:02:01.671 TEST_HEADER include/spdk/config.h
00:02:01.671 TEST_HEADER include/spdk/conf.h
00:02:01.671 CC test/rpc_client/rpc_client_test.o
00:02:01.671 CC app/iscsi_tgt/iscsi_tgt.o
00:02:01.671 TEST_HEADER include/spdk/cpuset.h
00:02:01.671 TEST_HEADER include/spdk/crc64.h
00:02:01.671 TEST_HEADER include/spdk/crc32.h
00:02:01.671 TEST_HEADER include/spdk/crc16.h
00:02:01.671 TEST_HEADER include/spdk/dif.h
00:02:01.671 CC app/nvmf_tgt/nvmf_main.o
00:02:01.671 TEST_HEADER include/spdk/endian.h
00:02:01.671 TEST_HEADER include/spdk/env_dpdk.h
00:02:01.671 TEST_HEADER include/spdk/dma.h
00:02:01.671 TEST_HEADER include/spdk/env.h
00:02:01.671 TEST_HEADER include/spdk/event.h
00:02:01.671 TEST_HEADER include/spdk/fd.h
00:02:01.671 TEST_HEADER include/spdk/file.h
00:02:01.671 TEST_HEADER include/spdk/fd_group.h
00:02:01.671 TEST_HEADER include/spdk/ftl.h
00:02:01.671 TEST_HEADER include/spdk/gpt_spec.h
00:02:01.671 CC app/spdk_dd/spdk_dd.o
00:02:01.671 TEST_HEADER include/spdk/hexlify.h
00:02:01.671 TEST_HEADER include/spdk/histogram_data.h
00:02:01.671 TEST_HEADER include/spdk/idxd.h
00:02:01.671 TEST_HEADER include/spdk/idxd_spec.h
00:02:01.671 TEST_HEADER include/spdk/init.h
00:02:01.671 TEST_HEADER include/spdk/ioat.h
00:02:01.671 TEST_HEADER include/spdk/ioat_spec.h
00:02:01.671 TEST_HEADER include/spdk/iscsi_spec.h
00:02:01.671 TEST_HEADER include/spdk/json.h
00:02:01.671 TEST_HEADER include/spdk/jsonrpc.h
00:02:01.671 CC app/vhost/vhost.o
00:02:01.671 CC examples/interrupt_tgt/interrupt_tgt.o
00:02:01.671 TEST_HEADER include/spdk/keyring.h
00:02:01.671 TEST_HEADER include/spdk/keyring_module.h
00:02:01.671 TEST_HEADER include/spdk/likely.h
00:02:01.671 TEST_HEADER include/spdk/log.h
00:02:01.671 TEST_HEADER include/spdk/lvol.h
00:02:01.671 TEST_HEADER include/spdk/memory.h
00:02:01.671 TEST_HEADER include/spdk/mmio.h
00:02:01.671 TEST_HEADER include/spdk/nbd.h
00:02:01.671 TEST_HEADER include/spdk/notify.h
00:02:01.671 TEST_HEADER include/spdk/nvme.h
00:02:01.671 CC app/spdk_tgt/spdk_tgt.o
00:02:01.671 TEST_HEADER include/spdk/nvme_intel.h
00:02:01.671 TEST_HEADER include/spdk/nvme_ocssd.h
00:02:01.671 TEST_HEADER include/spdk/nvme_spec.h
00:02:01.671 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:02:01.671 TEST_HEADER include/spdk/nvme_zns.h
00:02:01.671 TEST_HEADER include/spdk/nvmf_cmd.h
00:02:01.671 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:02:01.671 TEST_HEADER include/spdk/nvmf_spec.h
00:02:01.671 TEST_HEADER include/spdk/nvmf.h
00:02:01.671 TEST_HEADER include/spdk/nvmf_transport.h
00:02:01.671 TEST_HEADER include/spdk/opal_spec.h
00:02:01.671 TEST_HEADER include/spdk/opal.h
00:02:01.671 TEST_HEADER include/spdk/pci_ids.h
00:02:01.671 TEST_HEADER include/spdk/queue.h
00:02:01.671 TEST_HEADER include/spdk/pipe.h
00:02:01.671 TEST_HEADER include/spdk/reduce.h
00:02:01.671 TEST_HEADER include/spdk/rpc.h
00:02:01.671 TEST_HEADER include/spdk/scsi_spec.h
00:02:01.671 TEST_HEADER include/spdk/scheduler.h
00:02:01.671 TEST_HEADER include/spdk/sock.h
00:02:01.671 TEST_HEADER include/spdk/scsi.h
00:02:01.671 TEST_HEADER include/spdk/stdinc.h
00:02:01.671 TEST_HEADER include/spdk/string.h
00:02:01.671 TEST_HEADER include/spdk/thread.h
00:02:01.671 TEST_HEADER include/spdk/trace.h
00:02:01.671 TEST_HEADER include/spdk/trace_parser.h
00:02:01.671 TEST_HEADER include/spdk/tree.h
00:02:01.671 TEST_HEADER include/spdk/ublk.h
00:02:01.671 TEST_HEADER include/spdk/util.h
00:02:01.671 TEST_HEADER include/spdk/uuid.h
00:02:01.671 TEST_HEADER include/spdk/version.h
00:02:01.671 TEST_HEADER include/spdk/vfio_user_pci.h
00:02:01.671 TEST_HEADER include/spdk/vfio_user_spec.h
00:02:01.671 TEST_HEADER include/spdk/vhost.h
00:02:01.671 TEST_HEADER include/spdk/xor.h
00:02:01.671 TEST_HEADER include/spdk/zipf.h
00:02:01.671 TEST_HEADER include/spdk/vmd.h
00:02:01.671 CXX test/cpp_headers/accel.o
00:02:01.671 CXX test/cpp_headers/accel_module.o
00:02:01.671 CXX test/cpp_headers/assert.o
00:02:01.671 CXX test/cpp_headers/barrier.o
00:02:01.671 CXX test/cpp_headers/base64.o
00:02:01.671 CXX test/cpp_headers/bdev.o
00:02:01.671 CXX test/cpp_headers/bdev_zone.o
00:02:01.671 CXX test/cpp_headers/bdev_module.o
00:02:01.671 CXX test/cpp_headers/bit_pool.o
00:02:01.671 CXX test/cpp_headers/blobfs_bdev.o
00:02:01.671 CXX test/cpp_headers/blob_bdev.o
00:02:01.671 CXX test/cpp_headers/bit_array.o
00:02:01.671 CXX test/cpp_headers/blobfs.o
00:02:01.671 CXX test/cpp_headers/conf.o
00:02:01.671 CXX test/cpp_headers/blob.o
00:02:01.671 CXX test/cpp_headers/config.o
00:02:01.671 CXX test/cpp_headers/crc16.o
00:02:01.671 CXX test/cpp_headers/cpuset.o
00:02:01.671 CXX test/cpp_headers/crc32.o
00:02:01.671 CXX test/cpp_headers/dif.o
00:02:01.671 CXX test/cpp_headers/dma.o
00:02:01.671 CXX test/cpp_headers/crc64.o
00:02:01.671 CXX test/cpp_headers/env.o
00:02:01.671 CXX test/cpp_headers/endian.o
00:02:01.671 CXX test/cpp_headers/env_dpdk.o
00:02:01.671 CXX test/cpp_headers/event.o
00:02:01.671 CXX test/cpp_headers/fd.o
00:02:01.671 CXX test/cpp_headers/ftl.o
00:02:01.671 CXX test/cpp_headers/file.o
00:02:01.671 CXX test/cpp_headers/fd_group.o
00:02:01.671 CXX test/cpp_headers/gpt_spec.o
00:02:01.671 CXX test/cpp_headers/hexlify.o
00:02:01.671 CXX test/cpp_headers/histogram_data.o
00:02:01.671 CXX test/cpp_headers/idxd.o
00:02:01.671 CXX test/cpp_headers/idxd_spec.o
00:02:01.671 CXX test/cpp_headers/init.o
00:02:01.671 CXX test/cpp_headers/ioat.o
00:02:01.671 CXX test/cpp_headers/ioat_spec.o
00:02:01.671 CXX test/cpp_headers/jsonrpc.o
00:02:01.671 CXX test/cpp_headers/iscsi_spec.o
00:02:01.671 CXX test/cpp_headers/json.o
00:02:01.671 CXX test/cpp_headers/keyring.o
00:02:01.671 CXX test/cpp_headers/log.o
00:02:01.671 CXX test/cpp_headers/keyring_module.o
00:02:01.671 CXX test/cpp_headers/likely.o
00:02:01.671 CXX test/cpp_headers/lvol.o
00:02:01.671 CXX test/cpp_headers/memory.o
00:02:01.671 CXX test/cpp_headers/mmio.o
00:02:01.671 CXX test/cpp_headers/nbd.o
00:02:01.671 CXX test/cpp_headers/notify.o
00:02:01.671 CXX test/cpp_headers/nvme.o
00:02:01.671 CXX test/cpp_headers/nvme_ocssd.o
00:02:01.671 CXX test/cpp_headers/nvme_ocssd_spec.o
00:02:01.671 CXX test/cpp_headers/nvme_intel.o
00:02:01.671 CXX test/cpp_headers/nvme_spec.o
00:02:01.671 CXX test/cpp_headers/nvmf_cmd.o
00:02:01.671 CXX test/cpp_headers/nvmf_fc_spec.o
00:02:01.671 CXX test/cpp_headers/nvme_zns.o
00:02:01.672 CXX test/cpp_headers/nvmf.o
00:02:01.672 CXX test/cpp_headers/nvmf_spec.o
00:02:01.672 CXX test/cpp_headers/nvmf_transport.o
00:02:01.672 CXX test/cpp_headers/opal.o
00:02:01.672 CXX test/cpp_headers/pci_ids.o
00:02:01.672 CXX test/cpp_headers/opal_spec.o
00:02:01.672 CXX test/cpp_headers/pipe.o
00:02:01.672 CXX test/cpp_headers/queue.o
00:02:01.672 CXX test/cpp_headers/reduce.o
00:02:01.933 CXX test/cpp_headers/rpc.o
00:02:01.933 CXX test/cpp_headers/scheduler.o
00:02:01.933 CXX test/cpp_headers/scsi.o
00:02:01.933 CC examples/util/zipf/zipf.o
00:02:01.933 CC test/nvme/e2edp/nvme_dp.o
00:02:01.933 CC test/nvme/sgl/sgl.o
00:02:01.933 CC examples/accel/perf/accel_perf.o
00:02:01.933 CC app/fio/nvme/fio_plugin.o
00:02:01.933 CC test/nvme/startup/startup.o
00:02:01.933 CC test/nvme/connect_stress/connect_stress.o
00:02:01.933 CC test/nvme/boot_partition/boot_partition.o
00:02:01.933 CC test/nvme/reset/reset.o
00:02:01.933 CC examples/nvme/reconnect/reconnect.o
00:02:01.933 CC examples/nvme/abort/abort.o
00:02:01.933 CC test/accel/dif/dif.o
00:02:01.933 CC test/nvme/overhead/overhead.o
00:02:01.933 CC examples/sock/hello_world/hello_sock.o
00:02:01.933 CC examples/nvme/cmb_copy/cmb_copy.o
00:02:01.933 CC test/event/event_perf/event_perf.o
00:02:01.933 CC examples/nvme/nvme_manage/nvme_manage.o
00:02:01.933 CC examples/vmd/led/led.o
00:02:01.933 CC test/nvme/fused_ordering/fused_ordering.o
00:02:01.933 CC test/nvme/err_injection/err_injection.o
00:02:01.933 CC examples/idxd/perf/perf.o
00:02:01.933 CXX test/cpp_headers/scsi_spec.o
00:02:01.933 CC examples/nvme/hello_world/hello_world.o
00:02:01.933 CC test/env/memory/memory_ut.o
00:02:01.933 CC test/nvme/cuse/cuse.o
00:02:01.933 CC examples/ioat/verify/verify.o
00:02:01.933 CC test/nvme/simple_copy/simple_copy.o
00:02:01.933 CC test/app/histogram_perf/histogram_perf.o
00:02:01.933 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:02:01.933 CC
test/nvme/compliance/nvme_compliance.o 00:02:01.933 CC test/env/vtophys/vtophys.o 00:02:01.933 CC test/nvme/reserve/reserve.o 00:02:01.933 CC examples/nvme/hotplug/hotplug.o 00:02:01.933 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:01.933 CC examples/vmd/lsvmd/lsvmd.o 00:02:01.933 CC test/blobfs/mkfs/mkfs.o 00:02:01.933 CC test/thread/poller_perf/poller_perf.o 00:02:01.933 CC test/nvme/aer/aer.o 00:02:01.933 CC examples/ioat/perf/perf.o 00:02:01.933 CC test/nvme/fdp/fdp.o 00:02:01.933 CC test/event/reactor_perf/reactor_perf.o 00:02:01.933 CC examples/nvme/arbitration/arbitration.o 00:02:01.933 CC test/bdev/bdevio/bdevio.o 00:02:01.933 CC examples/nvmf/nvmf/nvmf.o 00:02:01.933 CC test/event/reactor/reactor.o 00:02:01.933 CC test/app/jsoncat/jsoncat.o 00:02:01.933 CC examples/bdev/hello_world/hello_bdev.o 00:02:01.933 CC test/env/pci/pci_ut.o 00:02:01.933 CC test/event/app_repeat/app_repeat.o 00:02:01.933 CC test/app/stub/stub.o 00:02:01.933 CXX test/cpp_headers/sock.o 00:02:01.933 CC test/dma/test_dma/test_dma.o 00:02:01.933 CC examples/thread/thread/thread_ex.o 00:02:01.933 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:01.933 CC examples/blob/hello_world/hello_blob.o 00:02:01.933 CC examples/blob/cli/blobcli.o 00:02:01.933 CC app/fio/bdev/fio_plugin.o 00:02:01.933 CC examples/bdev/bdevperf/bdevperf.o 00:02:01.933 CC test/event/scheduler/scheduler.o 00:02:01.933 LINK spdk_lspci 00:02:01.933 CC test/app/bdev_svc/bdev_svc.o 00:02:02.201 LINK rpc_client_test 00:02:02.201 LINK nvmf_tgt 00:02:02.201 LINK interrupt_tgt 00:02:02.201 LINK spdk_nvme_discover 00:02:02.201 LINK vhost 00:02:02.201 LINK spdk_trace_record 00:02:02.201 CC test/env/mem_callbacks/mem_callbacks.o 00:02:02.201 LINK iscsi_tgt 00:02:02.201 CC test/lvol/esnap/esnap.o 00:02:02.473 LINK spdk_tgt 00:02:02.473 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:02.473 LINK startup 00:02:02.473 LINK lsvmd 00:02:02.473 LINK zipf 00:02:02.473 LINK led 00:02:02.473 LINK event_perf 00:02:02.473 LINK 
histogram_perf 00:02:02.473 LINK reactor 00:02:02.473 CXX test/cpp_headers/stdinc.o 00:02:02.473 LINK boot_partition 00:02:02.473 LINK poller_perf 00:02:02.473 CXX test/cpp_headers/string.o 00:02:02.473 LINK reactor_perf 00:02:02.473 LINK jsoncat 00:02:02.473 CXX test/cpp_headers/thread.o 00:02:02.473 LINK cmb_copy 00:02:02.473 CXX test/cpp_headers/trace.o 00:02:02.473 CXX test/cpp_headers/trace_parser.o 00:02:02.473 LINK connect_stress 00:02:02.473 CXX test/cpp_headers/tree.o 00:02:02.473 LINK vtophys 00:02:02.473 LINK app_repeat 00:02:02.732 CXX test/cpp_headers/ublk.o 00:02:02.732 LINK env_dpdk_post_init 00:02:02.732 CXX test/cpp_headers/util.o 00:02:02.732 LINK doorbell_aers 00:02:02.732 CXX test/cpp_headers/uuid.o 00:02:02.732 CXX test/cpp_headers/version.o 00:02:02.732 CXX test/cpp_headers/vfio_user_spec.o 00:02:02.732 CXX test/cpp_headers/vfio_user_pci.o 00:02:02.732 CXX test/cpp_headers/vhost.o 00:02:02.732 CXX test/cpp_headers/xor.o 00:02:02.732 CXX test/cpp_headers/vmd.o 00:02:02.732 LINK spdk_dd 00:02:02.732 LINK pmr_persistence 00:02:02.732 CXX test/cpp_headers/zipf.o 00:02:02.732 LINK err_injection 00:02:02.732 LINK reserve 00:02:02.732 LINK verify 00:02:02.732 LINK stub 00:02:02.732 LINK mkfs 00:02:02.732 LINK hello_world 00:02:02.732 LINK fused_ordering 00:02:02.732 LINK bdev_svc 00:02:02.732 LINK ioat_perf 00:02:02.732 LINK hello_sock 00:02:02.732 LINK reset 00:02:02.732 LINK simple_copy 00:02:02.732 LINK hotplug 00:02:02.732 LINK hello_bdev 00:02:02.732 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:02.732 LINK overhead 00:02:02.732 LINK scheduler 00:02:02.732 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:02.732 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:02.732 LINK hello_blob 00:02:02.732 LINK spdk_trace 00:02:02.732 LINK sgl 00:02:02.732 LINK nvme_dp 00:02:02.732 LINK nvme_compliance 00:02:02.732 LINK aer 00:02:02.732 LINK nvmf 00:02:02.732 LINK abort 00:02:02.732 LINK thread 00:02:02.732 LINK reconnect 00:02:02.732 LINK dif 00:02:02.732 
LINK arbitration 00:02:02.732 LINK idxd_perf 00:02:02.732 LINK fdp 00:02:02.732 LINK bdevio 00:02:02.993 LINK test_dma 00:02:02.993 LINK nvme_manage 00:02:02.993 LINK accel_perf 00:02:02.993 LINK pci_ut 00:02:02.993 LINK spdk_nvme 00:02:02.993 LINK spdk_bdev 00:02:02.993 LINK nvme_fuzz 00:02:02.993 LINK blobcli 00:02:02.993 LINK spdk_nvme_perf 00:02:03.254 LINK spdk_top 00:02:03.254 LINK vhost_fuzz 00:02:03.254 LINK spdk_nvme_identify 00:02:03.254 LINK mem_callbacks 00:02:03.254 LINK memory_ut 00:02:03.254 LINK bdevperf 00:02:03.516 LINK cuse 00:02:04.087 LINK iscsi_fuzz 00:02:06.014 LINK esnap 00:02:06.276 00:02:06.276 real 0m49.147s 00:02:06.276 user 6m33.566s 00:02:06.276 sys 4m36.794s 00:02:06.276 15:12:23 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:02:06.276 15:12:23 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.276 ************************************ 00:02:06.276 END TEST make 00:02:06.276 ************************************ 00:02:06.276 15:12:23 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:06.276 15:12:23 -- pm/common@30 -- $ signal_monitor_resources TERM 00:02:06.276 15:12:23 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:02:06.276 15:12:23 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.276 15:12:23 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:06.276 15:12:23 -- pm/common@45 -- $ pid=1293553 00:02:06.276 15:12:23 -- pm/common@52 -- $ sudo kill -TERM 1293553 00:02:06.276 15:12:23 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.276 15:12:23 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:06.276 15:12:23 -- pm/common@45 -- $ pid=1293561 00:02:06.276 15:12:23 -- pm/common@52 -- $ sudo kill -TERM 1293561 00:02:06.276 15:12:23 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.276 15:12:23 -- 
pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:06.276 15:12:23 -- pm/common@45 -- $ pid=1293562 00:02:06.276 15:12:23 -- pm/common@52 -- $ sudo kill -TERM 1293562 00:02:06.276 15:12:23 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.276 15:12:23 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:06.276 15:12:23 -- pm/common@45 -- $ pid=1293563 00:02:06.276 15:12:23 -- pm/common@52 -- $ sudo kill -TERM 1293563 00:02:06.538 15:12:23 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:06.538 15:12:23 -- nvmf/common.sh@7 -- # uname -s 00:02:06.538 15:12:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:06.538 15:12:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:06.538 15:12:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:06.538 15:12:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:06.538 15:12:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:06.538 15:12:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:06.538 15:12:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:06.538 15:12:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:06.538 15:12:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:06.538 15:12:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:06.538 15:12:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:06.538 15:12:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:06.538 15:12:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:06.538 15:12:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:06.538 15:12:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:06.538 15:12:23 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:06.538 15:12:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:06.538 15:12:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:06.538 15:12:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:06.538 15:12:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:06.538 15:12:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.538 15:12:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.538 15:12:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.538 15:12:23 -- paths/export.sh@5 -- # export PATH 00:02:06.538 15:12:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.538 15:12:23 -- nvmf/common.sh@47 -- # : 0 00:02:06.538 15:12:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:06.538 15:12:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:06.538 15:12:23 -- nvmf/common.sh@25 -- # '[' 
0 -eq 1 ']' 00:02:06.538 15:12:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:06.538 15:12:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:06.538 15:12:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:06.538 15:12:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:06.538 15:12:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:06.538 15:12:23 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:06.538 15:12:23 -- spdk/autotest.sh@32 -- # uname -s 00:02:06.538 15:12:23 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:06.538 15:12:23 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:06.538 15:12:23 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:06.538 15:12:23 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:06.538 15:12:23 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:06.538 15:12:23 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:06.538 15:12:23 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:06.538 15:12:23 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:06.538 15:12:23 -- spdk/autotest.sh@48 -- # udevadm_pid=1356304 00:02:06.538 15:12:23 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:06.538 15:12:23 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:06.538 15:12:23 -- pm/common@17 -- # local monitor 00:02:06.538 15:12:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.538 15:12:23 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1356307 00:02:06.538 15:12:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.538 15:12:23 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1356309 00:02:06.538 15:12:23 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:06.538 15:12:23 -- pm/common@21 -- # date +%s 00:02:06.538 15:12:23 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1356311 00:02:06.538 15:12:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.538 15:12:23 -- pm/common@21 -- # date +%s 00:02:06.538 15:12:23 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1356315 00:02:06.538 15:12:23 -- pm/common@26 -- # sleep 1 00:02:06.538 15:12:23 -- pm/common@21 -- # date +%s 00:02:06.538 15:12:23 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714137143 00:02:06.538 15:12:23 -- pm/common@21 -- # date +%s 00:02:06.538 15:12:23 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714137143 00:02:06.539 15:12:23 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714137143 00:02:06.539 15:12:23 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714137143 00:02:06.539 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714137143_collect-vmstat.pm.log 00:02:06.539 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714137143_collect-cpu-load.pm.log 00:02:06.539 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714137143_collect-bmc-pm.bmc.pm.log 00:02:06.539 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714137143_collect-cpu-temp.pm.log 00:02:07.482 15:12:24 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:07.482 15:12:24 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:07.482 15:12:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:07.482 15:12:24 -- common/autotest_common.sh@10 -- # set +x 00:02:07.482 15:12:24 -- spdk/autotest.sh@59 -- # create_test_list 00:02:07.482 15:12:24 -- common/autotest_common.sh@734 -- # xtrace_disable 00:02:07.482 15:12:24 -- common/autotest_common.sh@10 -- # set +x 00:02:07.742 15:12:24 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:07.742 15:12:24 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:07.742 15:12:24 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:07.742 15:12:24 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:07.742 15:12:24 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:07.742 15:12:24 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:07.742 15:12:24 -- common/autotest_common.sh@1441 -- # uname 00:02:07.742 15:12:24 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:02:07.742 15:12:24 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:07.742 15:12:24 -- common/autotest_common.sh@1461 -- # uname 00:02:07.742 15:12:24 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:02:07.742 15:12:24 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:07.742 15:12:24 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:07.742 15:12:24 -- spdk/autotest.sh@72 -- # hash lcov 00:02:07.742 15:12:24 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:07.742 15:12:24 -- spdk/autotest.sh@80 -- # export 
'LCOV_OPTS= 00:02:07.742 --rc lcov_branch_coverage=1 00:02:07.742 --rc lcov_function_coverage=1 00:02:07.742 --rc genhtml_branch_coverage=1 00:02:07.742 --rc genhtml_function_coverage=1 00:02:07.742 --rc genhtml_legend=1 00:02:07.742 --rc geninfo_all_blocks=1 00:02:07.742 ' 00:02:07.742 15:12:24 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:07.742 --rc lcov_branch_coverage=1 00:02:07.742 --rc lcov_function_coverage=1 00:02:07.742 --rc genhtml_branch_coverage=1 00:02:07.742 --rc genhtml_function_coverage=1 00:02:07.742 --rc genhtml_legend=1 00:02:07.742 --rc geninfo_all_blocks=1 00:02:07.742 ' 00:02:07.742 15:12:24 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:07.742 --rc lcov_branch_coverage=1 00:02:07.742 --rc lcov_function_coverage=1 00:02:07.742 --rc genhtml_branch_coverage=1 00:02:07.742 --rc genhtml_function_coverage=1 00:02:07.742 --rc genhtml_legend=1 00:02:07.742 --rc geninfo_all_blocks=1 00:02:07.742 --no-external' 00:02:07.743 15:12:24 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:07.743 --rc lcov_branch_coverage=1 00:02:07.743 --rc lcov_function_coverage=1 00:02:07.743 --rc genhtml_branch_coverage=1 00:02:07.743 --rc genhtml_function_coverage=1 00:02:07.743 --rc genhtml_legend=1 00:02:07.743 --rc geninfo_all_blocks=1 00:02:07.743 --no-external' 00:02:07.743 15:12:24 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:07.743 lcov: LCOV version 1.14 00:02:07.743 15:12:25 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:15.884 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 
00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:15.884 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:15.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:15.884 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:15.884 geninfo: WARNING: GCOV reported "no functions found" and produced no data for the following files under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ (00:02:15.884-00:02:15.885): gpt_spec.gcno, ioat.gcno, idxd.gcno, hexlify.gcno, ioat_spec.gcno, jsonrpc.gcno, idxd_spec.gcno, keyring.gcno, init.gcno, json.gcno, log.gcno, lvol.gcno, iscsi_spec.gcno, likely.gcno, keyring_module.gcno, memory.gcno, nvme_ocssd.gcno, nbd.gcno, notify.gcno, mmio.gcno, nvme_spec.gcno, nvme_intel.gcno, nvme.gcno, nvmf_cmd.gcno, nvmf_fc_spec.gcno, nvmf_spec.gcno, nvme_ocssd_spec.gcno, nvme_zns.gcno, nvmf.gcno, nvmf_transport.gcno, opal.gcno, opal_spec.gcno, pci_ids.gcno, reduce.gcno, pipe.gcno, queue.gcno, rpc.gcno, scheduler.gcno, scsi.gcno, scsi_spec.gcno, sock.gcno, stdinc.gcno, string.gcno, thread.gcno, trace.gcno, trace_parser.gcno, tree.gcno, ublk.gcno, util.gcno, version.gcno, uuid.gcno, vfio_user_pci.gcno, vhost.gcno 00:02:15.885
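The geninfo warnings above follow the same two-line pattern per `.gcno` file ("no functions found", then "GCOV did not produce any data for ..."). One way to get a de-duplicated list of the affected files is to grep a saved copy of this output; the log name and sample lines below are illustrative assumptions modeled on the format shown here, not part of the SPDK tooling.

```shell
# Illustrative only: extract the unique .gcno files that geninfo flagged
# with "no functions found" from a saved build log. The log file name and
# the sample lines are assumptions modeled on the output above.
log=geninfo-sample.log
printf '%s\n' \
  'test/cpp_headers/gpt_spec.gcno:no functions found' \
  'geninfo: WARNING: GCOV did not produce any data for test/cpp_headers/gpt_spec.gcno' \
  'test/cpp_headers/json.gcno:no functions found' > "$log"
# Keep only the "<path>.gcno" part before the first colon, de-duplicated.
grep -o '[^ ]*\.gcno:no functions found' "$log" | cut -d: -f1 | sort -u
```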
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:15.885 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:15.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:15.886 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:15.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:15.886 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:15.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:15.886 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:20.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:20.094 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:30.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:30.096 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:30.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:30.096 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:30.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:30.096 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:36.685 15:12:53 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:36.685 15:12:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:36.685 15:12:53 -- common/autotest_common.sh@10 -- # set +x 00:02:36.685 15:12:53 -- spdk/autotest.sh@91 -- # rm -f 00:02:36.685 15:12:53 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:39.234 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:39.234 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:39.234 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:39.234 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:39.234 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:39.234 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:39.495 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:39.495 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:39.495 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:39.495 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:39.495 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:39.495 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:39.495 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:39.495 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:39.495 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:39.495 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:39.495 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:39.761 15:12:57 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:39.761 15:12:57 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:39.761 15:12:57 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:39.761 15:12:57 -- common/autotest_common.sh@1656 -- # local nvme 
bdf 00:02:39.761 15:12:57 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:39.761 15:12:57 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:39.761 15:12:57 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:39.761 15:12:57 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:39.761 15:12:57 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:39.761 15:12:57 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:39.761 15:12:57 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:39.761 15:12:57 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:39.761 15:12:57 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:39.761 15:12:57 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:39.761 15:12:57 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:40.022 No valid GPT data, bailing 00:02:40.022 15:12:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:40.022 15:12:57 -- scripts/common.sh@391 -- # pt= 00:02:40.022 15:12:57 -- scripts/common.sh@392 -- # return 1 00:02:40.022 15:12:57 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:40.022 1+0 records in 00:02:40.022 1+0 records out 00:02:40.022 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00487278 s, 215 MB/s 00:02:40.022 15:12:57 -- spdk/autotest.sh@118 -- # sync 00:02:40.022 15:12:57 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:40.022 15:12:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:40.022 15:12:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:48.197 15:13:04 -- spdk/autotest.sh@124 -- # uname -s 00:02:48.197 15:13:04 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:48.197 15:13:04 -- spdk/autotest.sh@125 -- # run_test setup.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:48.197 15:13:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:48.197 15:13:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:48.197 15:13:04 -- common/autotest_common.sh@10 -- # set +x 00:02:48.197 ************************************ 00:02:48.197 START TEST setup.sh 00:02:48.197 ************************************ 00:02:48.197 15:13:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:48.197 * Looking for test storage... 00:02:48.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:48.197 15:13:04 -- setup/test-setup.sh@10 -- # uname -s 00:02:48.197 15:13:04 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:48.197 15:13:04 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:48.197 15:13:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:48.197 15:13:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:48.197 15:13:04 -- common/autotest_common.sh@10 -- # set +x 00:02:48.197 ************************************ 00:02:48.197 START TEST acl 00:02:48.197 ************************************ 00:02:48.197 15:13:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:48.197 * Looking for test storage... 
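Each suite in the trace above is launched through a `run_test` helper that prints the `START TEST` / `END TEST` banners seen in the log. The real helper lives in SPDK's `common/autotest_common.sh` and also handles timing and xtrace state; the version below is only a simplified sketch of the banner-and-exit-code behavior, not the actual implementation.

```shell
# Simplified sketch of a run_test-style wrapper (assumption: the real
# SPDK helper also records timing and xtrace state, which is omitted here).
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    "$@"                     # run the suite, preserving its exit status
    local rc=$?
    echo "END TEST $name"
    return $rc
}

run_test demo echo "suite body runs here"
```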
00:02:48.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:48.197 15:13:05 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:48.197 15:13:05 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:48.197 15:13:05 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:48.197 15:13:05 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:48.197 15:13:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:48.197 15:13:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:48.197 15:13:05 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:48.197 15:13:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:48.197 15:13:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:48.197 15:13:05 -- setup/acl.sh@12 -- # devs=() 00:02:48.197 15:13:05 -- setup/acl.sh@12 -- # declare -a devs 00:02:48.197 15:13:05 -- setup/acl.sh@13 -- # drivers=() 00:02:48.197 15:13:05 -- setup/acl.sh@13 -- # declare -A drivers 00:02:48.197 15:13:05 -- setup/acl.sh@51 -- # setup reset 00:02:48.197 15:13:05 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:48.197 15:13:05 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:52.397 15:13:09 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:52.397 15:13:09 -- setup/acl.sh@16 -- # local dev driver 00:02:52.397 15:13:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.397 15:13:09 -- setup/acl.sh@15 -- # setup output status 00:02:52.397 15:13:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:52.397 15:13:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:55.693 Hugepages 00:02:55.693 node hugesize free / total 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # continue 00:02:55.693 15:13:12 -- 
setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # continue 00:02:55.693 15:13:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # continue 00:02:55.693 15:13:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.693 00:02:55.693 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # continue 00:02:55.693 15:13:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # continue 00:02:55.693 15:13:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # continue 00:02:55.693 15:13:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # continue 00:02:55.693 15:13:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # continue 00:02:55.693 15:13:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.693 15:13:12 -- 
setup/acl.sh@20 -- # continue 00:02:55.693 15:13:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # continue 00:02:55.693 15:13:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # continue 00:02:55.693 15:13:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # continue 00:02:55.693 15:13:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:55.693 15:13:12 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:02:55.693 15:13:12 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:55.693 15:13:12 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:55.693 15:13:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # continue 00:02:55.693 15:13:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # continue 00:02:55.693 15:13:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:02:55.693 
15:13:12 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # continue 00:02:55.693 15:13:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # continue 00:02:55.693 15:13:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # continue 00:02:55.693 15:13:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # continue 00:02:55.693 15:13:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # continue 00:02:55.693 15:13:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.693 15:13:12 -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.693 15:13:12 -- setup/acl.sh@20 -- # continue 00:02:55.693 15:13:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.693 15:13:12 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:55.693 15:13:12 -- setup/acl.sh@54 -- # run_test denied denied 00:02:55.693 15:13:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:55.693 15:13:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:55.693 15:13:12 -- common/autotest_common.sh@10 -- # set +x 00:02:55.693 ************************************ 00:02:55.693 START TEST denied 00:02:55.693 
************************************ 00:02:55.693 15:13:12 -- common/autotest_common.sh@1111 -- # denied 00:02:55.693 15:13:12 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:02:55.693 15:13:12 -- setup/acl.sh@38 -- # setup output config 00:02:55.693 15:13:12 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:02:55.693 15:13:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:55.693 15:13:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:59.892 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:02:59.892 15:13:16 -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:02:59.892 15:13:16 -- setup/acl.sh@28 -- # local dev driver 00:02:59.892 15:13:16 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:59.892 15:13:16 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:02:59.892 15:13:16 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:02:59.892 15:13:16 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:59.892 15:13:16 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:59.892 15:13:16 -- setup/acl.sh@41 -- # setup reset 00:02:59.892 15:13:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:59.892 15:13:16 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:05.174 00:03:05.174 real 0m8.611s 00:03:05.174 user 0m2.836s 00:03:05.174 sys 0m5.036s 00:03:05.174 15:13:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:05.174 15:13:21 -- common/autotest_common.sh@10 -- # set +x 00:03:05.174 ************************************ 00:03:05.174 END TEST denied 00:03:05.174 ************************************ 00:03:05.174 15:13:21 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:05.174 15:13:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:05.174 15:13:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:05.174 
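The `acl.sh` trace above reads `setup.sh status`-style rows with `read -r _ dev _ _ _ driver _`, keeping only PCI functions bound to the `nvme` driver (the ioatdma entries all hit `continue`). A minimal reproduction of that classification loop, fed made-up sample rows rather than real hardware output:

```shell
# Sketch of the device-classification loop from setup/acl.sh: the BDF is
# column 2 and the driver is column 6 of `setup.sh status`-style rows.
# The sample rows below are illustrative, not real `setup.sh` output.
declare -a devs
declare -A drivers
while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue   # skip hugepage/header rows
    [[ $driver == nvme ]] || continue   # skip ioatdma and friends
    devs+=("$dev")
    drivers["$dev"]=$driver
done <<'EOF'
I/OAT 0000:00:01.0 8086 0b00 0 ioatdma -
I/OAT 0000:80:01.0 8086 0b00 1 ioatdma -
NVMe 0000:65:00.0 144d a80a 0 nvme nvme0
EOF
printf '%s\n' "${devs[@]}"   # prints 0000:65:00.0
```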
15:13:21 -- common/autotest_common.sh@10 -- # set +x 00:03:05.174 ************************************ 00:03:05.174 START TEST allowed 00:03:05.174 ************************************ 00:03:05.174 15:13:21 -- common/autotest_common.sh@1111 -- # allowed 00:03:05.174 15:13:21 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:05.174 15:13:21 -- setup/acl.sh@45 -- # setup output config 00:03:05.174 15:13:21 -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:05.174 15:13:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.174 15:13:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:10.464 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:10.464 15:13:26 -- setup/acl.sh@47 -- # verify 00:03:10.464 15:13:26 -- setup/acl.sh@28 -- # local dev driver 00:03:10.464 15:13:26 -- setup/acl.sh@48 -- # setup reset 00:03:10.464 15:13:26 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:10.464 15:13:26 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:13.810 00:03:13.810 real 0m9.371s 00:03:13.810 user 0m2.613s 00:03:13.810 sys 0m4.948s 00:03:13.810 15:13:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:13.810 15:13:31 -- common/autotest_common.sh@10 -- # set +x 00:03:13.810 ************************************ 00:03:13.810 END TEST allowed 00:03:13.810 ************************************ 00:03:13.810 00:03:13.810 real 0m26.150s 00:03:13.810 user 0m8.549s 00:03:13.810 sys 0m15.216s 00:03:13.810 15:13:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:13.810 15:13:31 -- common/autotest_common.sh@10 -- # set +x 00:03:13.810 ************************************ 00:03:13.810 END TEST acl 00:03:13.810 ************************************ 00:03:13.810 15:13:31 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:13.810 15:13:31 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:13.810 15:13:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:13.810 15:13:31 -- common/autotest_common.sh@10 -- # set +x 00:03:14.072 ************************************ 00:03:14.072 START TEST hugepages 00:03:14.072 ************************************ 00:03:14.072 15:13:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:14.072 * Looking for test storage... 00:03:14.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:14.073 15:13:31 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:14.073 15:13:31 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:14.073 15:13:31 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:14.073 15:13:31 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:14.073 15:13:31 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:14.073 15:13:31 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:14.073 15:13:31 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:14.073 15:13:31 -- setup/common.sh@18 -- # local node= 00:03:14.073 15:13:31 -- setup/common.sh@19 -- # local var val 00:03:14.073 15:13:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.073 15:13:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.073 15:13:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.073 15:13:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.073 15:13:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.073 15:13:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 107033132 kB' 'MemAvailable: 110563672 kB' 'Buffers: 4124 kB' 'Cached: 10548884 kB' 
'SwapCached: 0 kB' 'Active: 7641768 kB' 'Inactive: 3515716 kB' 'Active(anon): 6951372 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607916 kB' 'Mapped: 177468 kB' 'Shmem: 6346896 kB' 'KReclaimable: 297516 kB' 'Slab: 1075644 kB' 'SReclaimable: 297516 kB' 'SUnreclaim: 778128 kB' 'KernelStack: 26992 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460884 kB' 'Committed_AS: 8338928 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234748 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB' 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 
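The hugepages trace above shows `setup/common.sh`'s `get_meminfo` walking `/proc/meminfo` with `IFS=': ' read -r var val _` and comparing each key against the requested field (here `Hugepagesize`). A reduced sketch of that lookup, fed a fabricated meminfo snippet instead of the real file:

```shell
# Reduced sketch of the get_meminfo pattern from setup/common.sh:
# split "Key: value kB" lines on ':' and spaces, and print the value
# for the requested key. The meminfo lines below are fabricated samples.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo Hugepagesize <<'EOF'
MemTotal: 126338868 kB
HugePages_Total: 2048
Hugepagesize: 2048 kB
EOF
```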
15:13:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- 
setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.073 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.073 15:13:31 -- setup/common.sh@31 
-- # read -r var val _ 00:03:14.073 15:13:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.074 15:13:31 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # continue 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.074 15:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.074 15:13:31 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:14.074 15:13:31 -- setup/common.sh@33 -- # echo 2048 00:03:14.074 15:13:31 -- setup/common.sh@33 -- # return 0 00:03:14.074 15:13:31 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:14.074 15:13:31 -- setup/hugepages.sh@17 -- # 
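The trace above shows setup/common.sh's get_meminfo walking /proc/meminfo with IFS=': ', skipping every key that is not the requested one (the long run of "continue" lines), then echoing the matching value and returning. A minimal standalone sketch of that lookup — the function name get_meminfo_value and the self-contained layout are illustrative, not part of the SPDK scripts:

```shell
#!/usr/bin/env bash
# Sketch of the lookup get_meminfo performs in the trace above:
# split each /proc/meminfo line on ': ' and return the value for one key.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching key is skipped, like the "continue" lines above.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

# On this log's host this would print 2048 (Hugepagesize: 2048 kB).
get_meminfo_value Hugepagesize
```

The IFS=': ' split is why "Hugepagesize:       2048 kB" yields var=Hugepagesize and val=2048, with the trailing "kB" discarded into the throwaway `_` field.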
default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:14.074 15:13:31 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:14.074 15:13:31 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:14.074 15:13:31 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:14.074 15:13:31 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:14.074 15:13:31 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:14.074 15:13:31 -- setup/hugepages.sh@207 -- # get_nodes 00:03:14.074 15:13:31 -- setup/hugepages.sh@27 -- # local node 00:03:14.074 15:13:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.074 15:13:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:14.074 15:13:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.074 15:13:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:14.074 15:13:31 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:14.074 15:13:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:14.074 15:13:31 -- setup/hugepages.sh@208 -- # clear_hp 00:03:14.074 15:13:31 -- setup/hugepages.sh@37 -- # local node hp 00:03:14.074 15:13:31 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:14.074 15:13:31 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:14.074 15:13:31 -- setup/hugepages.sh@41 -- # echo 0 00:03:14.074 15:13:31 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:14.074 15:13:31 -- setup/hugepages.sh@41 -- # echo 0 00:03:14.334 15:13:31 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:14.334 15:13:31 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:14.335 15:13:31 -- setup/hugepages.sh@41 -- # echo 0 00:03:14.335 15:13:31 -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:14.335 15:13:31 -- setup/hugepages.sh@41 -- # echo 0 00:03:14.335 15:13:31 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:14.335 15:13:31 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:14.335 15:13:31 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:14.335 15:13:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:14.335 15:13:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:14.335 15:13:31 -- common/autotest_common.sh@10 -- # set +x 00:03:14.335 ************************************ 00:03:14.335 START TEST default_setup 00:03:14.335 ************************************ 00:03:14.335 15:13:31 -- common/autotest_common.sh@1111 -- # default_setup 00:03:14.335 15:13:31 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:14.335 15:13:31 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:14.335 15:13:31 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:14.335 15:13:31 -- setup/hugepages.sh@51 -- # shift 00:03:14.335 15:13:31 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:14.335 15:13:31 -- setup/hugepages.sh@52 -- # local node_ids 00:03:14.335 15:13:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:14.335 15:13:31 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:14.335 15:13:31 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:14.335 15:13:31 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:14.335 15:13:31 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:14.335 15:13:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:14.335 15:13:31 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:14.335 15:13:31 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:14.335 15:13:31 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:14.335 15:13:31 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:14.335 15:13:31 -- setup/hugepages.sh@70 -- # for _no_nodes in 
"${user_nodes[@]}" 00:03:14.335 15:13:31 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:14.335 15:13:31 -- setup/hugepages.sh@73 -- # return 0 00:03:14.335 15:13:31 -- setup/hugepages.sh@137 -- # setup output 00:03:14.335 15:13:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.335 15:13:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:17.640 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:17.640 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:17.640 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:17.640 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:17.640 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:17.640 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:17.640 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:17.900 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:17.900 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:17.900 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:17.900 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:17.900 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:17.900 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:17.900 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:17.900 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:17.900 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:17.900 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:18.162 15:13:35 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:18.162 15:13:35 -- setup/hugepages.sh@89 -- # local node 00:03:18.162 15:13:35 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:18.162 15:13:35 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:18.162 15:13:35 -- setup/hugepages.sh@92 -- # local surp 00:03:18.162 15:13:35 -- setup/hugepages.sh@93 -- # local resv 00:03:18.162 15:13:35 -- setup/hugepages.sh@94 -- # local anon 00:03:18.162 15:13:35 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:18.162 15:13:35 -- 
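The get_test_nr_hugepages call traced above (size=2097152, default_hugepages=2048) converts the requested size into a page count: 2097152 kB divided by the 2048 kB default hugepage size gives the nr_hugepages=1024 assigned to node 0. The arithmetic in isolation, with illustrative variable names:

```shell
# Reproduce the page-count arithmetic from the traced get_test_nr_hugepages call.
size_kb=2097152           # the size=2097152 argument seen in the trace
hugepagesize_kb=2048      # Hugepagesize reported by /proc/meminfo above
nr_hugepages=$(( size_kb / hugepagesize_kb ))
echo "$nr_hugepages"      # 1024, matching nr_hugepages=1024 in the trace
```

This is also why HugePages_Total drops from 2048 to 1024 between the two meminfo dumps: default_setup requests 2 GiB of 2 MiB pages on a single node.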
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:18.162 15:13:35 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:18.162 15:13:35 -- setup/common.sh@18 -- # local node= 00:03:18.162 15:13:35 -- setup/common.sh@19 -- # local var val 00:03:18.162 15:13:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.162 15:13:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.162 15:13:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.162 15:13:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.162 15:13:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.162 15:13:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.162 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.162 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.162 15:13:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109183952 kB' 'MemAvailable: 112714492 kB' 'Buffers: 4124 kB' 'Cached: 10549004 kB' 'SwapCached: 0 kB' 'Active: 7658588 kB' 'Inactive: 3515716 kB' 'Active(anon): 6968192 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624408 kB' 'Mapped: 178024 kB' 'Shmem: 6347016 kB' 'KReclaimable: 297516 kB' 'Slab: 1072996 kB' 'SReclaimable: 297516 kB' 'SUnreclaim: 775480 kB' 'KernelStack: 27184 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8354876 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234716 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB' 00:03:18.162 15:13:35 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.162 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.162 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.162 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.162 15:13:35 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.162 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.162 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.162 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.162 15:13:35 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.162 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.162 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.162 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.162 15:13:35 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.162 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.162 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.162 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.162 15:13:35 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.162 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.162 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.162 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.162 15:13:35 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.162 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.162 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.162 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.162 15:13:35 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.162 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.162 
15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.162 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 
-- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.163 15:13:35 -- setup/common.sh@33 -- # echo 0 00:03:18.163 15:13:35 -- setup/common.sh@33 -- # return 0 00:03:18.163 15:13:35 -- setup/hugepages.sh@97 -- # anon=0 00:03:18.163 15:13:35 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:18.163 15:13:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.163 15:13:35 -- setup/common.sh@18 -- # local node= 00:03:18.163 15:13:35 -- setup/common.sh@19 -- # local var val 00:03:18.163 15:13:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.163 15:13:35 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:18.163 15:13:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.163 15:13:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.163 15:13:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.163 15:13:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.163 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.163 15:13:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109185152 kB' 'MemAvailable: 112715692 kB' 'Buffers: 4124 kB' 'Cached: 10549008 kB' 'SwapCached: 0 kB' 'Active: 7657804 kB' 'Inactive: 3515716 kB' 'Active(anon): 6967408 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623816 kB' 'Mapped: 177916 kB' 'Shmem: 6347020 kB' 'KReclaimable: 297516 kB' 'Slab: 1072980 kB' 'SReclaimable: 297516 kB' 'SUnreclaim: 775464 kB' 'KernelStack: 27280 kB' 'PageTables: 8860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8356536 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234716 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB' 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 
00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 
00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.164 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.164 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.165 15:13:35 -- setup/common.sh@33 -- # echo 0 00:03:18.165 15:13:35 -- setup/common.sh@33 -- # return 0 00:03:18.165 15:13:35 -- setup/hugepages.sh@99 -- # surp=0 00:03:18.165 15:13:35 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:18.165 15:13:35 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:18.165 15:13:35 -- setup/common.sh@18 -- # local node= 00:03:18.165 15:13:35 -- setup/common.sh@19 -- # local var val 00:03:18.165 15:13:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.165 15:13:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.165 15:13:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.165 15:13:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.165 15:13:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.165 15:13:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.165 15:13:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109184800 kB' 'MemAvailable: 112715340 kB' 'Buffers: 4124 kB' 'Cached: 10549008 kB' 'SwapCached: 0 kB' 'Active: 7657912 kB' 'Inactive: 3515716 kB' 'Active(anon): 6967516 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623840 kB' 'Mapped: 177840 kB' 'Shmem: 6347020 kB' 'KReclaimable: 297516 kB' 'Slab: 1073008 kB' 'SReclaimable: 297516 kB' 'SUnreclaim: 775492 kB' 
'KernelStack: 27264 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8354908 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234684 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB' 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.165 15:13:35 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.165 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.165 
15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.165 15:13:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.427 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.427 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.427 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.427 15:13:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.427 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.427 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.427 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.427 15:13:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.428 15:13:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.428 15:13:35 -- setup/common.sh@33 -- # echo 0 00:03:18.428 15:13:35 -- setup/common.sh@33 -- # return 0 00:03:18.428 15:13:35 -- setup/hugepages.sh@100 -- # resv=0 00:03:18.428 15:13:35 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:18.428 nr_hugepages=1024 00:03:18.428 15:13:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:18.428 resv_hugepages=0 00:03:18.428 15:13:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:18.428 surplus_hugepages=0 00:03:18.428 15:13:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:18.428 anon_hugepages=0 00:03:18.428 15:13:35 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.428 15:13:35 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:18.428 15:13:35 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:18.428 15:13:35 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:18.428 15:13:35 -- setup/common.sh@18 -- # local node= 00:03:18.428 15:13:35 -- setup/common.sh@19 -- # local var val 00:03:18.428 15:13:35 -- setup/common.sh@20 -- # local 
mem_f mem 00:03:18.428 15:13:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.428 15:13:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.428 15:13:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.428 15:13:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.428 15:13:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.428 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109184700 kB' 'MemAvailable: 112715240 kB' 'Buffers: 4124 kB' 'Cached: 10549028 kB' 'SwapCached: 0 kB' 'Active: 7657040 kB' 'Inactive: 3515716 kB' 'Active(anon): 6966644 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622892 kB' 'Mapped: 177840 kB' 'Shmem: 6347040 kB' 'KReclaimable: 297516 kB' 'Slab: 1073008 kB' 'SReclaimable: 297516 kB' 'SUnreclaim: 775492 kB' 'KernelStack: 27216 kB' 'PageTables: 8968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8356700 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234732 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB' 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # 
continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 
-- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 
00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.429 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.429 15:13:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.429 15:13:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.430 15:13:35 -- setup/common.sh@33 -- # echo 1024 00:03:18.430 15:13:35 -- setup/common.sh@33 -- # return 0 00:03:18.430 15:13:35 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.430 15:13:35 -- setup/hugepages.sh@112 -- # get_nodes 00:03:18.430 15:13:35 -- setup/hugepages.sh@27 -- # local node 00:03:18.430 15:13:35 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:18.430 15:13:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:18.430 15:13:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.430 15:13:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:18.430 15:13:35 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.430 15:13:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.430 15:13:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.430 15:13:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.430 15:13:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:18.430 15:13:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.430 15:13:35 -- setup/common.sh@18 -- # local node=0 00:03:18.430 15:13:35 -- setup/common.sh@19 -- # local var val 00:03:18.430 15:13:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.430 15:13:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.430 15:13:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:18.430 15:13:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:18.430 15:13:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.430 15:13:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59374240 kB' 'MemUsed: 6284768 kB' 'SwapCached: 0 kB' 'Active: 2551812 kB' 'Inactive: 106348 kB' 'Active(anon): 2242292 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 106348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2547308 kB' 'Mapped: 109328 kB' 'AnonPages: 114068 kB' 'Shmem: 2131440 kB' 'KernelStack: 13656 kB' 'PageTables: 4064 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159900 kB' 'Slab: 520420 kB' 'SReclaimable: 159900 kB' 'SUnreclaim: 360520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 
-- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.430 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.430 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.431 15:13:35 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # [[ HugePages_Total 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # continue 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.431 15:13:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.431 15:13:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.431 15:13:35 -- setup/common.sh@33 -- # echo 0 00:03:18.431 15:13:35 -- setup/common.sh@33 -- # return 0 00:03:18.431 15:13:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.431 15:13:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.431 15:13:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.431 15:13:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.431 15:13:35 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:18.431 node0=1024 expecting 1024 00:03:18.431 15:13:35 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:18.431 00:03:18.431 real 0m4.011s 00:03:18.431 user 0m1.487s 00:03:18.431 sys 0m2.509s 00:03:18.431 15:13:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:18.431 15:13:35 -- common/autotest_common.sh@10 -- # set +x 00:03:18.431 ************************************ 00:03:18.431 END TEST default_setup 00:03:18.431 ************************************ 00:03:18.431 15:13:35 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:18.431 15:13:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:18.431 15:13:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:18.431 15:13:35 -- common/autotest_common.sh@10 -- # set +x 00:03:18.692 ************************************ 00:03:18.692 
START TEST per_node_1G_alloc 00:03:18.692 ************************************ 00:03:18.692 15:13:35 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:03:18.692 15:13:35 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:18.692 15:13:35 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:18.692 15:13:35 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:18.692 15:13:35 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:18.692 15:13:35 -- setup/hugepages.sh@51 -- # shift 00:03:18.692 15:13:35 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:18.692 15:13:35 -- setup/hugepages.sh@52 -- # local node_ids 00:03:18.692 15:13:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:18.692 15:13:35 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:18.692 15:13:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:18.692 15:13:35 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:18.692 15:13:35 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.692 15:13:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:18.692 15:13:35 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.692 15:13:35 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.692 15:13:35 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.692 15:13:35 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:18.692 15:13:35 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:18.692 15:13:35 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:18.692 15:13:35 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:18.692 15:13:35 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:18.692 15:13:35 -- setup/hugepages.sh@73 -- # return 0 00:03:18.692 15:13:35 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:18.692 15:13:35 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:18.692 15:13:35 -- setup/hugepages.sh@146 -- # setup output 00:03:18.692 15:13:35 -- 
setup/common.sh@9 -- # [[ output == output ]] 00:03:18.692 15:13:35 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:21.991 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:21.991 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:21.991 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:21.991 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:21.991 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:21.991 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:21.991 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:21.991 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:21.991 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:21.991 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:21.991 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:21.991 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:21.991 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:21.991 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:21.991 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:21.991 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:21.991 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:22.258 15:13:39 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:22.258 15:13:39 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:22.258 15:13:39 -- setup/hugepages.sh@89 -- # local node 00:03:22.258 15:13:39 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:22.258 15:13:39 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:22.258 15:13:39 -- setup/hugepages.sh@92 -- # local surp 00:03:22.258 15:13:39 -- setup/hugepages.sh@93 -- # local resv 00:03:22.258 15:13:39 -- setup/hugepages.sh@94 -- # local anon 00:03:22.258 15:13:39 -- setup/hugepages.sh@96 -- # [[ always 
[madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:22.258 15:13:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:22.258 15:13:39 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:22.258 15:13:39 -- setup/common.sh@18 -- # local node= 00:03:22.258 15:13:39 -- setup/common.sh@19 -- # local var val 00:03:22.258 15:13:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.258 15:13:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.258 15:13:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.258 15:13:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.258 15:13:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.258 15:13:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109185364 kB' 'MemAvailable: 112715856 kB' 'Buffers: 4124 kB' 'Cached: 10549148 kB' 'SwapCached: 0 kB' 'Active: 7656348 kB' 'Inactive: 3515716 kB' 'Active(anon): 6965952 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622228 kB' 'Mapped: 176740 kB' 'Shmem: 6347160 kB' 'KReclaimable: 297420 kB' 'Slab: 1073044 kB' 'SReclaimable: 297420 kB' 'SUnreclaim: 775624 kB' 'KernelStack: 27040 kB' 'PageTables: 8460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8343088 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234652 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 
kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB' 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- 
setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 
00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 
00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.258 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.258 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- 
setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.259 15:13:39 -- setup/common.sh@33 -- # echo 0 00:03:22.259 15:13:39 -- setup/common.sh@33 -- # return 0 00:03:22.259 15:13:39 -- setup/hugepages.sh@97 -- # anon=0 00:03:22.259 15:13:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:22.259 15:13:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.259 15:13:39 -- setup/common.sh@18 -- # local node= 00:03:22.259 15:13:39 -- setup/common.sh@19 -- # local var val 00:03:22.259 15:13:39 -- setup/common.sh@20 -- # local mem_f mem 
00:03:22.259 15:13:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.259 15:13:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.259 15:13:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.259 15:13:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.259 15:13:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109186244 kB' 'MemAvailable: 112716736 kB' 'Buffers: 4124 kB' 'Cached: 10549148 kB' 'SwapCached: 0 kB' 'Active: 7656864 kB' 'Inactive: 3515716 kB' 'Active(anon): 6966468 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622756 kB' 'Mapped: 176712 kB' 'Shmem: 6347160 kB' 'KReclaimable: 297420 kB' 'Slab: 1073044 kB' 'SReclaimable: 297420 kB' 'SUnreclaim: 775624 kB' 'KernelStack: 27040 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8343100 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234620 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB' 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 
00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 
15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.259 15:13:39 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:22.259 15:13:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # 
[[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.260 15:13:39 -- setup/common.sh@33 -- # echo 0 00:03:22.260 15:13:39 -- setup/common.sh@33 -- # return 0 00:03:22.260 15:13:39 -- setup/hugepages.sh@99 -- # surp=0 00:03:22.260 15:13:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:22.260 15:13:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:22.260 15:13:39 -- setup/common.sh@18 -- # local node= 00:03:22.260 15:13:39 -- setup/common.sh@19 -- # local var val 00:03:22.260 15:13:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.260 15:13:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.260 15:13:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.260 15:13:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.260 15:13:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.260 15:13:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.260 15:13:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109189220 kB' 'MemAvailable: 112719712 kB' 'Buffers: 4124 kB' 'Cached: 10549164 kB' 'SwapCached: 0 kB' 'Active: 7656528 kB' 'Inactive: 3515716 kB' 'Active(anon): 6966132 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622408 kB' 'Mapped: 176712 kB' 'Shmem: 6347176 kB' 'KReclaimable: 297420 kB' 'Slab: 1073056 kB' 'SReclaimable: 297420 kB' 'SUnreclaim: 775636 kB' 'KernelStack: 27040 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 70509460 kB' 'Committed_AS: 8343112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234620 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.260 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.260 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 
15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.261 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.261 15:13:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.262 15:13:39 -- setup/common.sh@33 -- # echo 0 00:03:22.262 15:13:39 -- setup/common.sh@33 -- # return 0 00:03:22.262 15:13:39 -- setup/hugepages.sh@100 -- # resv=0 00:03:22.262 15:13:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:22.262 nr_hugepages=1024 00:03:22.262 15:13:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:22.262 resv_hugepages=0 00:03:22.262 15:13:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:22.262 surplus_hugepages=0 00:03:22.262 15:13:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:22.262 anon_hugepages=0 00:03:22.262 15:13:39 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.262 15:13:39 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:22.262 15:13:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:22.262 15:13:39 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:22.262 15:13:39 -- setup/common.sh@18 -- # local node= 00:03:22.262 15:13:39 -- setup/common.sh@19 -- # local var val 00:03:22.262 15:13:39 -- setup/common.sh@20 -- # local 
mem_f mem 00:03:22.262 15:13:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.262 15:13:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.262 15:13:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.262 15:13:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.262 15:13:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.262 15:13:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109189220 kB' 'MemAvailable: 112719712 kB' 'Buffers: 4124 kB' 'Cached: 10549180 kB' 'SwapCached: 0 kB' 'Active: 7656480 kB' 'Inactive: 3515716 kB' 'Active(anon): 6966084 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622328 kB' 'Mapped: 176712 kB' 'Shmem: 6347192 kB' 'KReclaimable: 297420 kB' 'Slab: 1073056 kB' 'SReclaimable: 297420 kB' 'SUnreclaim: 775636 kB' 'KernelStack: 27024 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8343128 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234620 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB' 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # 
continue 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.262 15:13:39 
-- setup/common.sh@31 -- # IFS=': ' 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.262 15:13:39 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.262 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.262 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 
00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.263 15:13:39 -- setup/common.sh@33 -- # echo 1024 00:03:22.263 15:13:39 -- setup/common.sh@33 -- # return 0 00:03:22.263 15:13:39 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.263 15:13:39 -- setup/hugepages.sh@112 -- # get_nodes 00:03:22.263 15:13:39 -- setup/hugepages.sh@27 -- # local node 00:03:22.263 15:13:39 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:22.263 15:13:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:22.263 15:13:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.263 15:13:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:22.263 15:13:39 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:22.263 15:13:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:22.263 15:13:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.263 15:13:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.263 15:13:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:22.263 15:13:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.263 15:13:39 -- setup/common.sh@18 -- # local node=0 00:03:22.263 15:13:39 -- setup/common.sh@19 -- # local var val 00:03:22.263 15:13:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.263 15:13:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.263 15:13:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:22.263 15:13:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:22.263 15:13:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.263 15:13:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.263 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.263 15:13:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60405760 kB' 'MemUsed: 5253248 kB' 'SwapCached: 0 kB' 'Active: 2549720 kB' 'Inactive: 106348 kB' 'Active(anon): 2240200 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 106348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2547364 kB' 'Mapped: 108212 kB' 'AnonPages: 112044 kB' 'Shmem: 2131496 kB' 'KernelStack: 13544 kB' 'PageTables: 3916 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159900 kB' 'Slab: 520788 kB' 'SReclaimable: 159900 kB' 'SUnreclaim: 360888 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ HugePages_Total 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.264 15:13:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.264 15:13:39 -- setup/common.sh@33 -- # echo 0 00:03:22.264 15:13:39 -- setup/common.sh@33 -- # return 0 00:03:22.264 15:13:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.264 15:13:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.264 15:13:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.264 15:13:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:22.264 15:13:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.264 15:13:39 -- setup/common.sh@18 -- # local node=1 00:03:22.264 15:13:39 -- setup/common.sh@19 -- # local var val 00:03:22.264 15:13:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.264 15:13:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.264 15:13:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:22.264 15:13:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:22.264 15:13:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.264 15:13:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.264 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679860 kB' 'MemFree: 48783460 kB' 'MemUsed: 11896400 kB' 'SwapCached: 0 kB' 
'Active: 5107136 kB' 'Inactive: 3409368 kB' 'Active(anon): 4726260 kB' 'Inactive(anon): 0 kB' 'Active(file): 380876 kB' 'Inactive(file): 3409368 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8005952 kB' 'Mapped: 68500 kB' 'AnonPages: 510672 kB' 'Shmem: 4215708 kB' 'KernelStack: 13496 kB' 'PageTables: 4560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 137520 kB' 'Slab: 552268 kB' 'SReclaimable: 137520 kB' 'SUnreclaim: 414748 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 
15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 
15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # continue 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.265 15:13:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.265 15:13:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.265 15:13:39 -- setup/common.sh@33 -- # echo 0 00:03:22.265 15:13:39 -- setup/common.sh@33 -- # return 0 00:03:22.265 15:13:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.265 15:13:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.265 15:13:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.265 15:13:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.265 15:13:39 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:22.265 node0=512 expecting 512 00:03:22.265 15:13:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.265 15:13:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.265 15:13:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.266 15:13:39 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:22.266 node1=512 expecting 512 00:03:22.266 15:13:39 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:22.266 00:03:22.266 real 
0m3.822s 00:03:22.266 user 0m1.458s 00:03:22.266 sys 0m2.406s 00:03:22.266 15:13:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:22.266 15:13:39 -- common/autotest_common.sh@10 -- # set +x 00:03:22.266 ************************************ 00:03:22.266 END TEST per_node_1G_alloc 00:03:22.266 ************************************ 00:03:22.526 15:13:39 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:22.526 15:13:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:22.526 15:13:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:22.526 15:13:39 -- common/autotest_common.sh@10 -- # set +x 00:03:22.526 ************************************ 00:03:22.526 START TEST even_2G_alloc 00:03:22.526 ************************************ 00:03:22.526 15:13:39 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:03:22.526 15:13:39 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:22.527 15:13:39 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:22.527 15:13:39 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:22.527 15:13:39 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:22.527 15:13:39 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:22.527 15:13:39 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:22.527 15:13:39 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:22.527 15:13:39 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:22.527 15:13:39 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:22.527 15:13:39 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:22.527 15:13:39 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:22.527 15:13:39 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:22.527 15:13:39 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:22.527 15:13:39 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:22.527 15:13:39 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.527 15:13:39 -- setup/hugepages.sh@82 -- # 
nodes_test[_no_nodes - 1]=512 00:03:22.527 15:13:39 -- setup/hugepages.sh@83 -- # : 512 00:03:22.527 15:13:39 -- setup/hugepages.sh@84 -- # : 1 00:03:22.527 15:13:39 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.527 15:13:39 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:22.527 15:13:39 -- setup/hugepages.sh@83 -- # : 0 00:03:22.527 15:13:39 -- setup/hugepages.sh@84 -- # : 0 00:03:22.527 15:13:39 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.527 15:13:39 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:22.527 15:13:39 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:22.527 15:13:39 -- setup/hugepages.sh@153 -- # setup output 00:03:22.527 15:13:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.527 15:13:39 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:25.826 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:25.826 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:25.826 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:25.826 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:25.826 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:25.826 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:25.826 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:25.826 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:25.826 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:25.826 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:25.826 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:25.826 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:25.826 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:25.826 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:25.826 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:25.826 0000:00:01.0 (8086 
0b00): Already using the vfio-pci driver 00:03:25.826 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:26.086 15:13:43 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:26.086 15:13:43 -- setup/hugepages.sh@89 -- # local node 00:03:26.086 15:13:43 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.086 15:13:43 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.086 15:13:43 -- setup/hugepages.sh@92 -- # local surp 00:03:26.086 15:13:43 -- setup/hugepages.sh@93 -- # local resv 00:03:26.086 15:13:43 -- setup/hugepages.sh@94 -- # local anon 00:03:26.086 15:13:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.086 15:13:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.086 15:13:43 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.086 15:13:43 -- setup/common.sh@18 -- # local node= 00:03:26.086 15:13:43 -- setup/common.sh@19 -- # local var val 00:03:26.087 15:13:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.087 15:13:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.087 15:13:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.087 15:13:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.087 15:13:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.087 15:13:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109179656 kB' 'MemAvailable: 112710116 kB' 'Buffers: 4124 kB' 'Cached: 10549296 kB' 'SwapCached: 0 kB' 'Active: 7658168 kB' 'Inactive: 3515716 kB' 'Active(anon): 6967772 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 623252 kB' 'Mapped: 176764 kB' 'Shmem: 6347308 kB' 'KReclaimable: 297356 kB' 'Slab: 1073168 kB' 'SReclaimable: 297356 kB' 'SUnreclaim: 775812 kB' 'KernelStack: 27040 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8344596 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234812 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB' 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- 
setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- 
setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 
00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.087 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.087 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.088 15:13:43 -- setup/common.sh@33 -- # echo 0 00:03:26.088 15:13:43 -- setup/common.sh@33 -- # return 0 00:03:26.088 15:13:43 -- setup/hugepages.sh@97 -- # anon=0 00:03:26.088 15:13:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.088 15:13:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.088 15:13:43 -- setup/common.sh@18 -- # local node= 00:03:26.088 15:13:43 -- setup/common.sh@19 -- # local var val 00:03:26.088 15:13:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.088 15:13:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.088 15:13:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.088 15:13:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.088 15:13:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.088 15:13:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109178688 kB' 'MemAvailable: 112709148 kB' 'Buffers: 4124 kB' 'Cached: 10549296 kB' 'SwapCached: 0 kB' 'Active: 7658120 kB' 'Inactive: 3515716 kB' 'Active(anon): 6967724 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623236 kB' 'Mapped: 176748 kB' 'Shmem: 6347308 kB' 'KReclaimable: 297356 kB' 'Slab: 1073168 kB' 'SReclaimable: 297356 kB' 'SUnreclaim: 775812 kB' 'KernelStack: 27072 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8345116 kB' 
'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234780 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB' 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- 
setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 
15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.088 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.088 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 
00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 
-- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 
00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.089 15:13:43 -- setup/common.sh@33 -- # echo 0 00:03:26.089 15:13:43 -- setup/common.sh@33 -- # return 0 00:03:26.089 15:13:43 -- setup/hugepages.sh@99 -- # surp=0 00:03:26.089 15:13:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:26.089 15:13:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:26.089 15:13:43 -- setup/common.sh@18 -- # local node= 00:03:26.089 15:13:43 -- setup/common.sh@19 -- # local var val 00:03:26.089 15:13:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.089 15:13:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.089 15:13:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.089 15:13:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.089 15:13:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.089 15:13:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109178216 kB' 'MemAvailable: 112708676 kB' 'Buffers: 4124 kB' 'Cached: 10549296 kB' 'SwapCached: 0 kB' 'Active: 7657376 kB' 'Inactive: 3515716 kB' 'Active(anon): 6966980 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622996 kB' 'Mapped: 176748 kB' 'Shmem: 6347308 kB' 'KReclaimable: 297356 kB' 'Slab: 1073168 kB' 'SReclaimable: 297356 kB' 'SUnreclaim: 775812 kB' 'KernelStack: 27040 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8344372 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234748 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB' 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.089 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.089 15:13:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- 
setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 
00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 
15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 
-- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.355 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.355 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 
00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 
15:13:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.356 15:13:43 -- setup/common.sh@33 -- # echo 0 00:03:26.356 15:13:43 -- setup/common.sh@33 -- # return 0 00:03:26.356 15:13:43 -- setup/hugepages.sh@100 -- # resv=0 00:03:26.356 15:13:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:26.356 nr_hugepages=1024 00:03:26.356 15:13:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:26.356 resv_hugepages=0 00:03:26.356 15:13:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:26.356 surplus_hugepages=0 
00:03:26.356 15:13:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:26.356 anon_hugepages=0 00:03:26.356 15:13:43 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.356 15:13:43 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:26.356 15:13:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:26.356 15:13:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:26.356 15:13:43 -- setup/common.sh@18 -- # local node= 00:03:26.356 15:13:43 -- setup/common.sh@19 -- # local var val 00:03:26.356 15:13:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.356 15:13:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.356 15:13:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.356 15:13:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.356 15:13:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.356 15:13:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109179104 kB' 'MemAvailable: 112709564 kB' 'Buffers: 4124 kB' 'Cached: 10549324 kB' 'SwapCached: 0 kB' 'Active: 7657088 kB' 'Inactive: 3515716 kB' 'Active(anon): 6966692 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622696 kB' 'Mapped: 176748 kB' 'Shmem: 6347336 kB' 'KReclaimable: 297356 kB' 'Slab: 1073192 kB' 'SReclaimable: 297356 kB' 'SUnreclaim: 775836 kB' 'KernelStack: 27040 kB' 'PageTables: 8480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8344388 kB' 'VmallocTotal: 13743895347199 kB' 
'VmallocUsed: 234748 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB' 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.356 15:13:43 -- 
setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.356 15:13:43 -- 
setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.356 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.356 15:13:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 
00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.357 15:13:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.357 15:13:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.357 15:13:43 -- setup/common.sh@33 -- # echo 1024 00:03:26.357 15:13:43 -- setup/common.sh@33 -- # return 0 00:03:26.357 15:13:43 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.357 15:13:43 -- setup/hugepages.sh@112 -- # get_nodes 00:03:26.357 15:13:43 -- setup/hugepages.sh@27 -- # local node 00:03:26.357 15:13:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.357 15:13:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:26.357 15:13:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.357 15:13:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:26.357 15:13:43 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:26.357 15:13:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.357 15:13:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.357 15:13:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.357 15:13:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:26.357 15:13:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.357 15:13:43 -- setup/common.sh@18 -- # local node=0 00:03:26.357 15:13:43 -- setup/common.sh@19 -- # local var val 00:03:26.357 15:13:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.357 15:13:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.357 15:13:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:26.357 15:13:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:26.357 15:13:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.357 15:13:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.357 15:13:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.357 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60388596 kB' 'MemUsed: 5270412 kB' 'SwapCached: 0 kB' 'Active: 2549640 kB' 'Inactive: 106348 kB' 'Active(anon): 2240120 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 106348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2547456 kB' 'Mapped: 108248 kB' 'AnonPages: 111700 kB' 'Shmem: 2131588 kB' 'KernelStack: 13512 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159900 kB' 'Slab: 520892 kB' 'SReclaimable: 159900 kB' 'SUnreclaim: 360992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 
-- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 
00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.358 15:13:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.358 15:13:43 -- setup/common.sh@33 -- # echo 0 00:03:26.358 15:13:43 -- setup/common.sh@33 -- # return 0 00:03:26.358 15:13:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.358 15:13:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.358 15:13:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.358 15:13:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:26.358 15:13:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.358 15:13:43 -- setup/common.sh@18 -- # local node=1 00:03:26.358 15:13:43 -- setup/common.sh@19 -- # local var val 00:03:26.358 15:13:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.358 15:13:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
00:03:26.358 15:13:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:26.358 15:13:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:26.358 15:13:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.358 15:13:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.358 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679860 kB' 'MemFree: 48790256 kB' 'MemUsed: 11889604 kB' 'SwapCached: 0 kB' 'Active: 5107816 kB' 'Inactive: 3409368 kB' 'Active(anon): 4726940 kB' 'Inactive(anon): 0 kB' 'Active(file): 380876 kB' 'Inactive(file): 3409368 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8006004 kB' 'Mapped: 68500 kB' 'AnonPages: 511344 kB' 'Shmem: 4215760 kB' 'KernelStack: 13528 kB' 'PageTables: 4660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 137456 kB' 'Slab: 552300 kB' 'SReclaimable: 137456 kB' 'SUnreclaim: 414844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 
15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- 
setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 
15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.359 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.359 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.359 
15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.360 15:13:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.360 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.360 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.360 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.360 15:13:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.360 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.360 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.360 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.360 15:13:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.360 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.360 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.360 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.360 15:13:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.360 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.360 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.360 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.360 15:13:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.360 15:13:43 -- setup/common.sh@32 -- # continue 00:03:26.360 15:13:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.360 15:13:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.360 15:13:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.360 15:13:43 -- setup/common.sh@33 -- # echo 0 00:03:26.360 15:13:43 -- setup/common.sh@33 -- # return 0 00:03:26.360 15:13:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.360 15:13:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.360 15:13:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.360 15:13:43 -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:03:26.360 15:13:43 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:26.360 node0=512 expecting 512 00:03:26.360 15:13:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.360 15:13:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.360 15:13:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.360 15:13:43 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:26.360 node1=512 expecting 512 00:03:26.360 15:13:43 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:26.360 00:03:26.360 real 0m3.751s 00:03:26.360 user 0m1.463s 00:03:26.360 sys 0m2.328s 00:03:26.360 15:13:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:26.360 15:13:43 -- common/autotest_common.sh@10 -- # set +x 00:03:26.360 ************************************ 00:03:26.360 END TEST even_2G_alloc 00:03:26.360 ************************************ 00:03:26.360 15:13:43 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:26.360 15:13:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:26.360 15:13:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:26.360 15:13:43 -- common/autotest_common.sh@10 -- # set +x 00:03:26.360 ************************************ 00:03:26.360 START TEST odd_alloc 00:03:26.360 ************************************ 00:03:26.360 15:13:43 -- common/autotest_common.sh@1111 -- # odd_alloc 00:03:26.360 15:13:43 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:26.360 15:13:43 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:26.360 15:13:43 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:26.360 15:13:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:26.360 15:13:43 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:26.360 15:13:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:26.360 15:13:43 -- setup/hugepages.sh@62 -- # user_nodes=() 
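The `IFS=': ' read -r var val _` / `[[ $var == ... ]] && continue` pattern that dominates the trace above is how `setup/common.sh`'s `get_meminfo` pulls a single counter (here `HugePages_Surp`) out of a meminfo-style file. A minimal, hypothetical sketch of that parsing technique (the function name and the canned input are illustrative, not the actual SPDK helper, which reads `/proc/meminfo` or a per-node `meminfo` file):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo-style parse loop seen in the xtrace:
# split each "Key: value [unit]" line on ': ', skip non-matching keys,
# and print the value for the requested key (0 if it never appears).
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated "continue" lines in the log
        echo "$val"
        return 0
    done
    echo 0                                 # the "echo 0 / return 0" tail in the log
}

# Deterministic canned input instead of /proc/meminfo, so the sketch is portable.
sample='HugePages_Total: 512
HugePages_Free: 512
HugePages_Surp: 0'

printf '%s\n' "$sample" | get_meminfo_sketch HugePages_Total   # prints 512
printf '%s\n' "$sample" | get_meminfo_sketch HugePages_Surp    # prints 0
```

Because every non-matching key takes the `continue` branch, a single lookup emits one traced comparison per meminfo line, which is exactly why the xtrace above is so long.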
00:03:26.360 15:13:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.360 15:13:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:26.360 15:13:43 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:26.360 15:13:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:26.360 15:13:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:26.360 15:13:43 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:26.360 15:13:43 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:26.360 15:13:43 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.360 15:13:43 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:26.360 15:13:43 -- setup/hugepages.sh@83 -- # : 513 00:03:26.360 15:13:43 -- setup/hugepages.sh@84 -- # : 1 00:03:26.360 15:13:43 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.360 15:13:43 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:26.360 15:13:43 -- setup/hugepages.sh@83 -- # : 0 00:03:26.360 15:13:43 -- setup/hugepages.sh@84 -- # : 0 00:03:26.360 15:13:43 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.360 15:13:43 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:26.360 15:13:43 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:26.360 15:13:43 -- setup/hugepages.sh@160 -- # setup output 00:03:26.360 15:13:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.360 15:13:43 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:29.745 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:29.745 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:29.745 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:29.745 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:29.745 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:29.745 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:29.745 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 
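The `odd_alloc` setup above asks for 1025 hugepages (`nr_hugepages=1025`, `_no_nodes=2`) and the trace shows `nodes_test` ending up as 513 on node 0 and 512 on node 1. A hypothetical sketch of that odd-count split (this is an illustration of the arithmetic visible in the trace, not the actual `get_test_nr_hugepages_per_node` implementation):

```shell
#!/usr/bin/env bash
# Distribute an odd hugepage count evenly across NUMA nodes,
# giving the leftover page to node 0, as in the trace (513 + 512 = 1025).
nr_hugepages=1025
no_nodes=2
declare -a nodes_test

# Even base share for every node.
for ((node = 0; node < no_nodes; node++)); do
    nodes_test[node]=$((nr_hugepages / no_nodes))
done

# Hand the remainder to node 0 so the per-node counts sum to the request.
nodes_test[0]=$((nodes_test[0] + nr_hugepages % no_nodes))

echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512
```

This matches the later verification step in the log, where the test echoes the per-node expectation (`node0=512 expecting 512` in the even case) and compares it against the `HugePages_Total` read back from each node's meminfo.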
00:03:29.745 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:30.006 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:30.006 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:30.006 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:30.006 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:30.006 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:30.006 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:30.006 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:30.006 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:30.006 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:30.273 15:13:47 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:30.273 15:13:47 -- setup/hugepages.sh@89 -- # local node 00:03:30.273 15:13:47 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:30.273 15:13:47 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:30.273 15:13:47 -- setup/hugepages.sh@92 -- # local surp 00:03:30.273 15:13:47 -- setup/hugepages.sh@93 -- # local resv 00:03:30.273 15:13:47 -- setup/hugepages.sh@94 -- # local anon 00:03:30.273 15:13:47 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.273 15:13:47 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:30.273 15:13:47 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.273 15:13:47 -- setup/common.sh@18 -- # local node= 00:03:30.273 15:13:47 -- setup/common.sh@19 -- # local var val 00:03:30.273 15:13:47 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.273 15:13:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.273 15:13:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.273 15:13:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.273 15:13:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.273 15:13:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) 
}") 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 15:13:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109208880 kB' 'MemAvailable: 112739340 kB' 'Buffers: 4124 kB' 'Cached: 10549440 kB' 'SwapCached: 0 kB' 'Active: 7658976 kB' 'Inactive: 3515716 kB' 'Active(anon): 6968580 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624424 kB' 'Mapped: 176936 kB' 'Shmem: 6347452 kB' 'KReclaimable: 297356 kB' 'Slab: 1073400 kB' 'SReclaimable: 297356 kB' 'SUnreclaim: 776044 kB' 'KernelStack: 27232 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 8347824 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234732 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB' 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 15:13:47 -- setup/common.sh@31 
-- # read -r var val _ 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 15:13:47 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # [[ Zswap == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.273 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.274 15:13:47 -- 
setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 
00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.274 15:13:47 -- setup/common.sh@33 -- # echo 0 00:03:30.274 15:13:47 -- setup/common.sh@33 -- # return 0 00:03:30.274 15:13:47 -- setup/hugepages.sh@97 -- # anon=0 00:03:30.274 15:13:47 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.274 15:13:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.274 15:13:47 -- setup/common.sh@18 -- # local node= 00:03:30.274 15:13:47 -- setup/common.sh@19 -- # local var val 00:03:30.274 15:13:47 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.274 15:13:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.274 15:13:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.274 15:13:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.274 15:13:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.274 15:13:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109209032 kB' 'MemAvailable: 112739492 kB' 'Buffers: 4124 kB' 'Cached: 10549444 kB' 'SwapCached: 0 kB' 'Active: 7658408 kB' 
'Inactive: 3515716 kB' 'Active(anon): 6968012 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623812 kB' 'Mapped: 176824 kB' 'Shmem: 6347456 kB' 'KReclaimable: 297356 kB' 'Slab: 1073380 kB' 'SReclaimable: 297356 kB' 'SUnreclaim: 776024 kB' 'KernelStack: 27120 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 8347708 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234716 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB' 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 15:13:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ Active(file) 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # 
continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 
15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.275 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.275 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.276 15:13:47 -- setup/common.sh@33 -- # echo 0 00:03:30.276 15:13:47 -- setup/common.sh@33 -- # return 0 00:03:30.276 15:13:47 -- setup/hugepages.sh@99 -- # surp=0 00:03:30.276 15:13:47 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:30.276 15:13:47 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.276 15:13:47 -- setup/common.sh@18 -- # local 
node= 00:03:30.276 15:13:47 -- setup/common.sh@19 -- # local var val 00:03:30.276 15:13:47 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.276 15:13:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.276 15:13:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.276 15:13:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.276 15:13:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.276 15:13:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109208216 kB' 'MemAvailable: 112738676 kB' 'Buffers: 4124 kB' 'Cached: 10549456 kB' 'SwapCached: 0 kB' 'Active: 7658444 kB' 'Inactive: 3515716 kB' 'Active(anon): 6968048 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623968 kB' 'Mapped: 176884 kB' 'Shmem: 6347468 kB' 'KReclaimable: 297356 kB' 'Slab: 1073380 kB' 'SReclaimable: 297356 kB' 'SUnreclaim: 776024 kB' 'KernelStack: 27184 kB' 'PageTables: 8840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 8347860 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234716 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB' 00:03:30.276 15:13:47 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 
15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # 
continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.276 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 
15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.277 15:13:47 -- setup/common.sh@33 -- # echo 0 00:03:30.277 15:13:47 -- setup/common.sh@33 -- # return 0 00:03:30.277 15:13:47 -- setup/hugepages.sh@100 -- # resv=0 00:03:30.277 15:13:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:30.277 nr_hugepages=1025 00:03:30.277 15:13:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.277 resv_hugepages=0 00:03:30.277 15:13:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.277 surplus_hugepages=0 00:03:30.277 15:13:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.277 anon_hugepages=0 00:03:30.277 15:13:47 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:30.277 15:13:47 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:30.277 15:13:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:30.277 15:13:47 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:30.277 15:13:47 -- setup/common.sh@18 -- # local node= 00:03:30.277 15:13:47 -- setup/common.sh@19 -- # local var val 00:03:30.277 15:13:47 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.277 15:13:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.277 15:13:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.277 15:13:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.277 15:13:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.277 15:13:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109208764 kB' 'MemAvailable: 112739224 kB' 'Buffers: 4124 kB' 'Cached: 10549468 kB' 'SwapCached: 0 kB' 
'Active: 7658540 kB' 'Inactive: 3515716 kB' 'Active(anon): 6968144 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624004 kB' 'Mapped: 176824 kB' 'Shmem: 6347480 kB' 'KReclaimable: 297356 kB' 'Slab: 1073380 kB' 'SReclaimable: 297356 kB' 'SUnreclaim: 776024 kB' 'KernelStack: 27072 kB' 'PageTables: 8820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 8348004 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234700 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB' 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 
15:13:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.277 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # [[ CmaTotal 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.278 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.278 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.279 15:13:47 -- setup/common.sh@33 -- # echo 1025 00:03:30.279 15:13:47 -- setup/common.sh@33 -- # return 0 00:03:30.279 15:13:47 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:30.279 15:13:47 -- setup/hugepages.sh@112 -- # get_nodes 00:03:30.279 15:13:47 -- setup/hugepages.sh@27 -- # local node 00:03:30.279 15:13:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.279 15:13:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:30.279 15:13:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.279 15:13:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:30.279 15:13:47 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.279 15:13:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.279 15:13:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.279 15:13:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.279 15:13:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:30.279 15:13:47 
-- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.279 15:13:47 -- setup/common.sh@18 -- # local node=0 00:03:30.279 15:13:47 -- setup/common.sh@19 -- # local var val 00:03:30.279 15:13:47 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.279 15:13:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.279 15:13:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.279 15:13:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.279 15:13:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.279 15:13:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60408936 kB' 'MemUsed: 5250072 kB' 'SwapCached: 0 kB' 'Active: 2549824 kB' 'Inactive: 106348 kB' 'Active(anon): 2240304 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 106348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2547556 kB' 'Mapped: 108280 kB' 'AnonPages: 111856 kB' 'Shmem: 2131688 kB' 'KernelStack: 13544 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159900 kB' 'Slab: 520820 kB' 'SReclaimable: 159900 kB' 'SUnreclaim: 360920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- 
setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 
00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.279 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.279 15:13:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.280 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.280 15:13:47 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.280 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.280 15:13:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.280 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.280 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.280 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.280 15:13:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.280 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.280 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.280 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.280 15:13:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.280 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.280 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.280 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.280 15:13:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.280 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.280 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.280 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.280 15:13:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.280 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.280 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.280 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.280 15:13:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.280 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.280 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.280 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.280 15:13:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.280 15:13:47 -- setup/common.sh@33 -- # echo 0 00:03:30.280 15:13:47 -- 
setup/common.sh@33 -- # return 0 00:03:30.542 15:13:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.543 15:13:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.543 15:13:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.543 15:13:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:30.543 15:13:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.543 15:13:47 -- setup/common.sh@18 -- # local node=1 00:03:30.543 15:13:47 -- setup/common.sh@19 -- # local var val 00:03:30.543 15:13:47 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.543 15:13:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.543 15:13:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:30.543 15:13:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:30.543 15:13:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.543 15:13:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679860 kB' 'MemFree: 48800572 kB' 'MemUsed: 11879288 kB' 'SwapCached: 0 kB' 'Active: 5108852 kB' 'Inactive: 3409368 kB' 'Active(anon): 4727976 kB' 'Inactive(anon): 0 kB' 'Active(file): 380876 kB' 'Inactive(file): 3409368 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8006068 kB' 'Mapped: 68544 kB' 'AnonPages: 512268 kB' 'Shmem: 4215824 kB' 'KernelStack: 13544 kB' 'PageTables: 4712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 137456 kB' 'Slab: 552560 kB' 'SReclaimable: 137456 kB' 'SUnreclaim: 415104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 
'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 
00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- 
setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ NFS_Unstable 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.543 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.543 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 15:13:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.544 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.544 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 15:13:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.544 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.544 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 15:13:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.544 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.544 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 15:13:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:30.544 15:13:47 -- setup/common.sh@32 -- # continue 00:03:30.544 15:13:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.544 15:13:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.544 15:13:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.544 15:13:47 -- setup/common.sh@33 -- # echo 0 00:03:30.544 15:13:47 -- setup/common.sh@33 -- # return 0 00:03:30.544 15:13:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.544 15:13:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.544 15:13:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.544 15:13:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.544 15:13:47 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:30.544 node0=512 expecting 513 00:03:30.544 15:13:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.544 15:13:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.544 15:13:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.544 15:13:47 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:30.544 node1=513 expecting 512 00:03:30.544 15:13:47 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:30.544 00:03:30.544 real 0m3.977s 00:03:30.544 user 0m1.666s 00:03:30.544 sys 0m2.356s 00:03:30.544 15:13:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:30.544 15:13:47 -- common/autotest_common.sh@10 -- # set +x 00:03:30.544 ************************************ 00:03:30.544 END TEST odd_alloc 00:03:30.544 ************************************ 00:03:30.544 15:13:47 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:30.544 15:13:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:30.544 15:13:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:30.544 15:13:47 -- common/autotest_common.sh@10 -- # set +x 00:03:30.544 
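The odd_alloc wrap-up above compares observed per-node counts against expected ones as a sorted set (the `sorted_t`/`sorted_s` assignments and the final `[[ 512 513 == \5\1\2\ \5\1\3 ]]` test), because the two hugepage pools may legitimately land on either NUMA node. A minimal sketch of that comparison, using the node0=512/node1=513 counts from the trace:

```shell
#!/usr/bin/env bash
# Indexing a plain (indexed) bash array by the count itself makes
# "${!arr[*]}" a sorted, de-duplicated list of counts, so two
# distributions match iff their key lists compare equal.
nodes_test=(512 513)   # observed per-node counts (from the trace)
nodes_expect=(513 512) # expected counts, possibly on swapped nodes
sorted_t=() sorted_s=()
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1
    sorted_s[nodes_expect[node]]=1
done
verdict=mismatch
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && verdict=match
echo "$verdict"
```

Using indexed (not associative) arrays matters here: bash lists indexed-array subscripts in ascending numeric order, which is what makes the single string comparison equivalent to a sorted-multiset check.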
************************************ 00:03:30.544 START TEST custom_alloc 00:03:30.544 ************************************ 00:03:30.544 15:13:47 -- common/autotest_common.sh@1111 -- # custom_alloc 00:03:30.544 15:13:47 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:30.544 15:13:47 -- setup/hugepages.sh@169 -- # local node 00:03:30.544 15:13:47 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:30.544 15:13:47 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:30.544 15:13:47 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:30.544 15:13:47 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:30.544 15:13:47 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:30.544 15:13:47 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:30.544 15:13:47 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:30.544 15:13:47 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:30.544 15:13:47 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:30.544 15:13:47 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:30.544 15:13:47 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.544 15:13:47 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:30.544 15:13:47 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.544 15:13:47 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.544 15:13:47 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.544 15:13:47 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:30.544 15:13:47 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:30.544 15:13:47 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.544 15:13:47 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:30.544 15:13:47 -- setup/hugepages.sh@83 -- # : 256 00:03:30.544 15:13:47 -- setup/hugepages.sh@84 -- # : 1 00:03:30.544 15:13:47 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.544 15:13:47 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:30.544 15:13:47 -- 
setup/hugepages.sh@83 -- # : 0 00:03:30.544 15:13:47 -- setup/hugepages.sh@84 -- # : 0 00:03:30.544 15:13:47 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.544 15:13:47 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:30.544 15:13:47 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:30.544 15:13:47 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:30.544 15:13:47 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:30.544 15:13:47 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:30.544 15:13:47 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:30.544 15:13:47 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:30.544 15:13:47 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:30.544 15:13:47 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:30.544 15:13:47 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.544 15:13:47 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:30.544 15:13:47 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.544 15:13:47 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.544 15:13:47 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.544 15:13:47 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:30.544 15:13:47 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:30.544 15:13:47 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:30.544 15:13:47 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:30.544 15:13:47 -- setup/hugepages.sh@78 -- # return 0 00:03:30.544 15:13:47 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:30.544 15:13:47 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:30.544 15:13:47 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:30.544 15:13:47 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:30.544 15:13:47 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:30.544 15:13:47 -- setup/hugepages.sh@182 
-- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:30.544 15:13:47 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:30.544 15:13:47 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:30.544 15:13:47 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:30.544 15:13:47 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.544 15:13:47 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:30.544 15:13:47 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.544 15:13:47 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.544 15:13:47 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.544 15:13:47 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:30.544 15:13:47 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:30.544 15:13:47 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:30.544 15:13:47 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:30.544 15:13:47 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:30.544 15:13:47 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:30.544 15:13:47 -- setup/hugepages.sh@78 -- # return 0 00:03:30.544 15:13:47 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:30.544 15:13:47 -- setup/hugepages.sh@187 -- # setup output 00:03:30.544 15:13:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.544 15:13:47 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:33.848 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:33.848 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:33.848 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:33.848 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:33.848 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:33.848 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:33.848 0000:80:01.0 (8086 0b00): 
Already using the vfio-pci driver 00:03:33.848 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:33.848 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:33.848 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:33.848 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:33.848 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:33.848 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:33.848 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:33.848 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:33.848 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:33.848 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:34.424 15:13:51 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:34.424 15:13:51 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:34.424 15:13:51 -- setup/hugepages.sh@89 -- # local node 00:03:34.424 15:13:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:34.424 15:13:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:34.424 15:13:51 -- setup/hugepages.sh@92 -- # local surp 00:03:34.424 15:13:51 -- setup/hugepages.sh@93 -- # local resv 00:03:34.424 15:13:51 -- setup/hugepages.sh@94 -- # local anon 00:03:34.424 15:13:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.424 15:13:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:34.424 15:13:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:34.424 15:13:51 -- setup/common.sh@18 -- # local node= 00:03:34.424 15:13:51 -- setup/common.sh@19 -- # local var val 00:03:34.424 15:13:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.424 15:13:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.424 15:13:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.424 15:13:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.424 15:13:51 -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:34.424 15:13:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108171180 kB' 'MemAvailable: 111701624 kB' 'Buffers: 4124 kB' 'Cached: 10549600 kB' 'SwapCached: 0 kB' 'Active: 7658296 kB' 'Inactive: 3515716 kB' 'Active(anon): 6967900 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623676 kB' 'Mapped: 176832 kB' 'Shmem: 6347612 kB' 'KReclaimable: 297324 kB' 'Slab: 1073036 kB' 'SReclaimable: 297324 kB' 'SUnreclaim: 775712 kB' 'KernelStack: 27008 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 8346196 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234828 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB' 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 
-- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 
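The `mapfile -t mem` / `mem=("${mem[@]#Node +([0-9]) }")` pair in the trace handles the per-node variant of meminfo: the files under `/sys/devices/system/node/nodeN/` prefix every line with "Node N ", and the extglob pattern removal strips that prefix so a single "field: value" parser serves both the global and per-node files. A small sketch (the sample lines are hypothetical):

```shell
#!/usr/bin/env bash
# Strip the "Node N " prefix that per-node meminfo files carry, using
# the same extglob pattern removal the trace shows, so the stripped
# lines can feed the ordinary "field: value" parser.
shopt -s extglob
mapfile -t mem <<'EOF'
Node 0 MemTotal: 126338868 kB
Node 0 HugePages_Surp: 0
EOF
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"
```

`+([0-9])` is an extglob pattern (one or more digits), so `shopt -s extglob` must be in effect when the expansion runs; without it the prefix would not match and the lines would pass through unchanged.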
00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 
00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 
-- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.424 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.424 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.425 15:13:51 -- setup/common.sh@33 -- # echo 0 00:03:34.425 15:13:51 -- setup/common.sh@33 -- # return 0 00:03:34.425 15:13:51 -- setup/hugepages.sh@97 -- # anon=0 00:03:34.425 15:13:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:34.425 15:13:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.425 15:13:51 -- setup/common.sh@18 -- # local node= 00:03:34.425 15:13:51 -- setup/common.sh@19 -- # local var val 00:03:34.425 15:13:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.425 15:13:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.425 15:13:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.425 15:13:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.425 15:13:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.425 15:13:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108170928 kB' 
'MemAvailable: 111701372 kB' 'Buffers: 4124 kB' 'Cached: 10549600 kB' 'SwapCached: 0 kB' 'Active: 7659360 kB' 'Inactive: 3515716 kB' 'Active(anon): 6968964 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624768 kB' 'Mapped: 176832 kB' 'Shmem: 6347612 kB' 'KReclaimable: 297324 kB' 'Slab: 1073036 kB' 'SReclaimable: 297324 kB' 'SUnreclaim: 775712 kB' 'KernelStack: 27088 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 8346208 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234812 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB' 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 
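Earlier in this trace, custom_alloc finished its setup by joining one `nodes_hp[N]=count` clause per node into the HUGENODE string (`nodes_hp[0]=512,nodes_hp[1]=1024`) while accumulating the 1536-page total now being verified. That assembly, re-sketched from the trace:

```shell
#!/usr/bin/env bash
# Build the comma-joined HUGENODE clause list and the page total that
# the trace shows being handed to the hugepage setup step.
nodes_hp=(512 1024)   # per-node page counts from the trace
clauses=() total=0
for node in "${!nodes_hp[@]}"; do
    clauses+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( total += nodes_hp[node] ))
done
HUGENODE=$(IFS=,; printf '%s' "${clauses[*]}")  # join with commas
echo "HUGENODE=$HUGENODE total=$total"
```

The subshell around the join keeps the temporary `IFS=,` from leaking into the rest of the script; `"${clauses[*]}"` joins array elements with the first character of `IFS`.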
00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.425 15:13:51 -- setup/common.sh@33 -- # echo 0 00:03:34.425 15:13:51 -- setup/common.sh@33 -- # return 0 00:03:34.425 15:13:51 -- setup/hugepages.sh@99 -- # surp=0 00:03:34.425 15:13:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:34.425 
15:13:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:34.425 15:13:51 -- setup/common.sh@18 -- # local node= 00:03:34.425 15:13:51 -- setup/common.sh@19 -- # local var val 00:03:34.425 15:13:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.425 15:13:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.425 15:13:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.425 15:13:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.425 15:13:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.425 15:13:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108172256 kB' 'MemAvailable: 111702700 kB' 'Buffers: 4124 kB' 'Cached: 10549600 kB' 'SwapCached: 0 kB' 'Active: 7658308 kB' 'Inactive: 3515716 kB' 'Active(anon): 6967912 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623692 kB' 'Mapped: 176824 kB' 'Shmem: 6347612 kB' 'KReclaimable: 297324 kB' 'Slab: 1073120 kB' 'SReclaimable: 297324 kB' 'SUnreclaim: 775796 kB' 'KernelStack: 27040 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 8346224 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234812 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
3145728 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB' 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.425 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.425 15:13:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 
-- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.426 15:13:51 -- setup/common.sh@33 -- # echo 0 00:03:34.426 15:13:51 -- setup/common.sh@33 -- # return 0 00:03:34.426 15:13:51 -- setup/hugepages.sh@100 -- # resv=0 00:03:34.426 15:13:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:34.426 nr_hugepages=1536 00:03:34.426 15:13:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:34.426 resv_hugepages=0 00:03:34.426 15:13:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:34.426 surplus_hugepages=0 00:03:34.426 15:13:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:34.426 anon_hugepages=0 00:03:34.426 15:13:51 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:34.426 15:13:51 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:34.426 15:13:51 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:34.426 15:13:51 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:34.426 15:13:51 -- setup/common.sh@18 -- # local node= 00:03:34.426 15:13:51 -- setup/common.sh@19 -- # local var val 00:03:34.426 15:13:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.426 15:13:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.426 15:13:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.426 15:13:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.426 15:13:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.426 15:13:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 126338868 kB' 'MemFree: 108172312 kB' 'MemAvailable: 111702756 kB' 'Buffers: 4124 kB' 'Cached: 10549640 kB' 'SwapCached: 0 kB' 'Active: 7657996 kB' 'Inactive: 3515716 kB' 'Active(anon): 6967600 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623360 kB' 'Mapped: 176824 kB' 'Shmem: 6347652 kB' 'KReclaimable: 297324 kB' 'Slab: 1073120 kB' 'SReclaimable: 297324 kB' 'SUnreclaim: 775796 kB' 'KernelStack: 27040 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 8346236 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234812 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB' 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.426 15:13:51 -- setup/common.sh@32 -- # continue 
00:03:34.426 15:13:51 -- setup/common.sh@31 -- # IFS=': '
00:03:34.426 15:13:51 -- setup/common.sh@31 -- # read -r var val _
[... xtrace "[[ <key> == HugePages_Total ]] / continue / IFS / read" repeats elided for the non-matching meminfo keys Buffers through Unaccepted ...]
00:03:34.427 15:13:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:34.427 15:13:51 -- setup/common.sh@33 -- # echo 1536
00:03:34.427 15:13:51 -- setup/common.sh@33 -- # return 0
00:03:34.427 15:13:51 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:34.427 15:13:51 -- setup/hugepages.sh@112 -- # get_nodes
00:03:34.427 15:13:51 -- setup/hugepages.sh@27 -- # local node
00:03:34.427 15:13:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:34.427 15:13:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:34.427 15:13:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:34.427 15:13:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:34.427 15:13:51 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:34.427 15:13:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:34.427 15:13:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:34.427 15:13:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:34.427 15:13:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:34.427 15:13:51 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:34.427 15:13:51 -- setup/common.sh@18 -- # local node=0
00:03:34.427 15:13:51 -- setup/common.sh@19 -- # local var val
00:03:34.427 15:13:51 -- setup/common.sh@20 -- # local mem_f mem
00:03:34.427 15:13:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.427 15:13:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:34.427 15:13:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:34.427 15:13:51 -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.427 15:13:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.427 15:13:51 -- setup/common.sh@31 -- # IFS=': '
00:03:34.427 15:13:51 -- setup/common.sh@31 -- # read -r var val _
00:03:34.427 15:13:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60393276 kB' 'MemUsed: 5265732 kB' 'SwapCached: 0 kB' 'Active: 2550840 kB' 'Inactive: 106348 kB' 'Active(anon): 2241320 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 106348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2547664 kB' 'Mapped: 108324 kB' 'AnonPages: 112704 kB' 'Shmem: 2131796 kB' 'KernelStack: 13512 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159900 kB' 'Slab: 520580 kB' 'SReclaimable: 159900 kB' 'SUnreclaim: 360680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace "[[ <key> == HugePages_Surp ]] / continue / IFS / read" repeats elided for the node0 keys MemTotal through HugePages_Free ...]
00:03:34.428 15:13:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:34.428 15:13:51 -- setup/common.sh@33 -- # echo 0
00:03:34.428 15:13:51 -- setup/common.sh@33 -- # return 0
00:03:34.428 15:13:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:34.428 15:13:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:34.428 15:13:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:34.428 15:13:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:34.428 15:13:51 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:34.428 15:13:51 -- setup/common.sh@18 -- # local node=1
00:03:34.428 15:13:51 -- setup/common.sh@19 -- # local var val
00:03:34.428 15:13:51 -- setup/common.sh@20 -- # local mem_f mem
00:03:34.428 15:13:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.428 15:13:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:34.428 15:13:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:34.428 15:13:51 -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.428 15:13:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.428 15:13:51 -- setup/common.sh@31 -- # IFS=': '
00:03:34.428 15:13:51 -- setup/common.sh@31 -- # read -r var val _
00:03:34.428 15:13:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679860 kB' 'MemFree: 47780376 kB' 'MemUsed: 12899484 kB' 'SwapCached: 0 kB' 'Active: 5107528 kB' 'Inactive: 3409368 kB' 'Active(anon): 4726652 kB' 'Inactive(anon): 0 kB' 'Active(file): 380876 kB' 'Inactive(file): 3409368 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8006104 kB' 'Mapped: 68500 kB' 'AnonPages: 511100 kB' 'Shmem: 4215860 kB' 'KernelStack: 13528 kB' 'PageTables: 4664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 137424 kB' 'Slab: 552520 kB' 'SReclaimable: 137424 kB' 'SUnreclaim: 415096 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace "[[ <key> == HugePages_Surp ]] / continue / IFS / read" repeats elided for the node1 keys MemTotal through HugePages_Total ...]
00:03:34.428 15:13:51 -- setup/common.sh@31 -- # IFS=': '
00:03:34.428 15:13:51 -- setup/common.sh@31 -- # read -r var val _
00:03:34.428 15:13:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.428 15:13:51 -- setup/common.sh@32 -- # continue 00:03:34.428 15:13:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.428 15:13:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.428 15:13:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.428 15:13:51 -- setup/common.sh@33 -- # echo 0 00:03:34.428 15:13:51 -- setup/common.sh@33 -- # return 0 00:03:34.428 15:13:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.428 15:13:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.428 15:13:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.428 15:13:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.428 15:13:51 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:34.428 node0=512 expecting 512 00:03:34.428 15:13:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.428 15:13:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.428 15:13:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.428 15:13:51 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:34.428 node1=1024 expecting 1024 00:03:34.428 15:13:51 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:34.428 00:03:34.428 real 0m3.779s 00:03:34.428 user 0m1.500s 00:03:34.428 sys 0m2.329s 00:03:34.428 15:13:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:34.428 15:13:51 -- common/autotest_common.sh@10 -- # set +x 00:03:34.428 ************************************ 00:03:34.428 END TEST custom_alloc 00:03:34.428 ************************************ 00:03:34.428 15:13:51 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:34.428 15:13:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:34.428 15:13:51 -- 
common/autotest_common.sh@1093 -- # xtrace_disable
00:03:34.428 15:13:51 -- common/autotest_common.sh@10 -- # set +x
00:03:34.690 ************************************
00:03:34.690 START TEST no_shrink_alloc
00:03:34.690 ************************************
00:03:34.690 15:13:51 -- common/autotest_common.sh@1111 -- # no_shrink_alloc
00:03:34.690 15:13:51 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:34.690 15:13:51 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:34.690 15:13:51 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:34.690 15:13:51 -- setup/hugepages.sh@51 -- # shift
00:03:34.690 15:13:51 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:34.690 15:13:51 -- setup/hugepages.sh@52 -- # local node_ids
00:03:34.690 15:13:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:34.690 15:13:51 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:34.690 15:13:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:34.690 15:13:51 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:34.690 15:13:51 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:34.690 15:13:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:34.690 15:13:51 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:34.690 15:13:51 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:34.690 15:13:51 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:34.690 15:13:51 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:34.690 15:13:51 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:34.690 15:13:51 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:34.690 15:13:51 -- setup/hugepages.sh@73 -- # return 0
00:03:34.690 15:13:51 -- setup/hugepages.sh@198 -- # setup output
00:03:34.690 15:13:51 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:34.690 15:13:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:37.988 0000:80:01.6 (8086 0b00): Already
using the vfio-pci driver
00:03:37.988 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:37.988 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:37.988 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:37.988 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:37.988 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:37.988 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:37.988 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:37.988 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:37.988 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:37.988 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:37.988 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:37.988 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:37.988 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:37.988 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:37.988 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:37.988 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:38.253 15:13:55 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:38.253 15:13:55 -- setup/hugepages.sh@89 -- # local node
00:03:38.253 15:13:55 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:38.253 15:13:55 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:38.253 15:13:55 -- setup/hugepages.sh@92 -- # local surp
00:03:38.253 15:13:55 -- setup/hugepages.sh@93 -- # local resv
00:03:38.253 15:13:55 -- setup/hugepages.sh@94 -- # local anon
00:03:38.253 15:13:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:38.253 15:13:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:38.253 15:13:55 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:38.253 15:13:55 -- setup/common.sh@18 -- # local node=
00:03:38.253 15:13:55 --
setup/common.sh@19 -- # local var val
00:03:38.253 15:13:55 -- setup/common.sh@20 -- # local mem_f mem
00:03:38.253 15:13:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.253 15:13:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:38.253 15:13:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:38.253 15:13:55 -- setup/common.sh@28 -- # mapfile -t mem
00:03:38.253 15:13:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:38.253 15:13:55 -- setup/common.sh@31 -- # IFS=': '
00:03:38.253 15:13:55 -- setup/common.sh@31 -- # read -r var val _
00:03:38.253 15:13:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109215848 kB' 'MemAvailable: 112746292 kB' 'Buffers: 4124 kB' 'Cached: 10549736 kB' 'SwapCached: 0 kB' 'Active: 7659976 kB' 'Inactive: 3515716 kB' 'Active(anon): 6969580 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624644 kB' 'Mapped: 176976 kB' 'Shmem: 6347748 kB' 'KReclaimable: 297324 kB' 'Slab: 1072372 kB' 'SReclaimable: 297324 kB' 'SUnreclaim: 775048 kB' 'KernelStack: 27024 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8346796 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234828 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB'
00:03:38.253 15:13:55 -- setup/common.sh@32 -- # [[ MemTotal
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:38.253 15:13:55 -- setup/common.sh@32 -- # continue
[... identical "IFS=': '" / 'read -r var val _' / 'continue' iterations repeat for each /proc/meminfo field until the requested one matches ...]
00:03:38.254 15:13:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:38.254 15:13:55 -- setup/common.sh@33 -- # echo 0
00:03:38.254 15:13:55 -- setup/common.sh@33 -- # return 0
00:03:38.254 15:13:55 -- setup/hugepages.sh@97 -- # anon=0
00:03:38.254 15:13:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:38.254 15:13:55 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:38.254 15:13:55 -- setup/common.sh@18 -- # local node=
00:03:38.254 15:13:55 -- setup/common.sh@19 -- # local var val
00:03:38.254 15:13:55 -- setup/common.sh@20 -- # local mem_f mem
00:03:38.254 15:13:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.254 15:13:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:38.254 15:13:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:38.254 15:13:55 -- setup/common.sh@28 -- #
mapfile -t mem
00:03:38.254 15:13:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:38.254 15:13:55 -- setup/common.sh@31 -- # IFS=': '
00:03:38.254 15:13:55 -- setup/common.sh@31 -- # read -r var val _
00:03:38.254 15:13:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109216204 kB' 'MemAvailable: 112746648 kB' 'Buffers: 4124 kB' 'Cached: 10549736 kB' 'SwapCached: 0 kB' 'Active: 7660004 kB' 'Inactive: 3515716 kB' 'Active(anon): 6969608 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624676 kB' 'Mapped: 176964 kB' 'Shmem: 6347748 kB' 'KReclaimable: 297324 kB' 'Slab: 1072372 kB' 'SReclaimable: 297324 kB' 'SUnreclaim: 775048 kB' 'KernelStack: 27040 kB' 'PageTables: 8496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8346808 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234828 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB'
00:03:38.254 15:13:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:38.254 15:13:55 -- setup/common.sh@32 -- # continue
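The scan traced above repeats one parsing idiom: each `/proc/meminfo` line is split with `IFS=': ' read -r var val _` and skipped until the field name matches the requested key. A minimal sketch of that pattern (this is an illustrative helper, not SPDK's actual `setup/common.sh` `get_meminfo`; the file argument is added so it can run against a sample snapshot):

```shell
# Hypothetical helper mirroring the traced idiom: scan a meminfo-style file
# and print the value of one field, e.g. HugePages_Surp.
get_meminfo_field() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # IFS=': ' strips the trailing ':' from the field name and splits
        # off the 'kB' unit suffix (which lands in the throwaway `_`).
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1   # field not present
}

# Usage against a small sample rather than the live /proc/meminfo:
sample=$(mktemp)
printf '%s\n' 'MemTotal: 126338868 kB' 'HugePages_Total: 1024' 'HugePages_Surp: 0' > "$sample"
get_meminfo_field HugePages_Surp "$sample"   # prints 0
rm -f "$sample"
```

The `echo 0` / `return 0` pairs in the trace correspond to the matching branch here: the value is emitted and the scan stops.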
00:03:38.254 15:13:55 -- setup/common.sh@31 -- # IFS=': '
00:03:38.254 15:13:55 -- setup/common.sh@31 -- # read -r var val _
[... identical "IFS=': '" / 'read -r var val _' / 'continue' iterations repeat for each /proc/meminfo field, none matching HugePages_Surp yet ...]
00:03:38.256 15:13:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:38.256 15:13:55 -- setup/common.sh@32 -- # continue
00:03:38.256 15:13:55 -- setup/common.sh@31 -- # IFS=': '
00:03:38.256 15:13:55 -- setup/common.sh@31 -- # read -r var val _
00:03:38.256 15:13:55 -- setup/common.sh@32 --
# [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.256 15:13:55 -- setup/common.sh@33 -- # echo 0 00:03:38.256 15:13:55 -- setup/common.sh@33 -- # return 0 00:03:38.256 15:13:55 -- setup/hugepages.sh@99 -- # surp=0 00:03:38.256 15:13:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:38.256 15:13:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:38.256 15:13:55 -- setup/common.sh@18 -- # local node= 00:03:38.256 15:13:55 -- setup/common.sh@19 -- # local var val 00:03:38.256 15:13:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.256 15:13:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.256 15:13:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.256 15:13:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.256 15:13:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.256 15:13:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.256 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.256 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.256 15:13:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109216128 kB' 'MemAvailable: 112746572 kB' 'Buffers: 4124 kB' 'Cached: 10549752 kB' 'SwapCached: 0 kB' 'Active: 7659220 kB' 'Inactive: 3515716 kB' 'Active(anon): 6968824 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624344 kB' 'Mapped: 176884 kB' 'Shmem: 6347764 kB' 'KReclaimable: 297324 kB' 'Slab: 1072380 kB' 'SReclaimable: 297324 kB' 'SUnreclaim: 775056 kB' 'KernelStack: 27040 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8346824 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234812 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 
kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB' 00:03:38.256 15:13:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.256 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.256 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.256 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.256 15:13:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.256 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.256 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.256 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.256 15:13:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.256 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.256 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.256 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.256 15:13:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.256 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.256 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.256 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.256 15:13:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.256 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.256 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.256 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.256 15:13:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.256 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.256 15:13:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:38.256 15:13:55 -- setup/common.sh@31 -- # read -r var val _ [... identical '[[ <field> == HugePages_Rsvd ]] / continue / IFS / read' trace repeated for each /proc/meminfo field from Active through Unaccepted ...] 00:03:38.257 15:13:55 -- setup/common.sh@32 -- # [[ HugePages_Total
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.257 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.257 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.257 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.257 15:13:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.257 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.257 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.257 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.257 15:13:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.257 15:13:55 -- setup/common.sh@33 -- # echo 0 00:03:38.257 15:13:55 -- setup/common.sh@33 -- # return 0 00:03:38.257 15:13:55 -- setup/hugepages.sh@100 -- # resv=0 00:03:38.257 15:13:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:38.257 nr_hugepages=1024 00:03:38.257 15:13:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:38.257 resv_hugepages=0 00:03:38.257 15:13:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:38.257 surplus_hugepages=0 00:03:38.257 15:13:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:38.257 anon_hugepages=0 00:03:38.257 15:13:55 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:38.257 15:13:55 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:38.257 15:13:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:38.257 15:13:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:38.257 15:13:55 -- setup/common.sh@18 -- # local node= 00:03:38.257 15:13:55 -- setup/common.sh@19 -- # local var val 00:03:38.257 15:13:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.257 15:13:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.257 15:13:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.257 15:13:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.257 15:13:55 -- setup/common.sh@28 -- 
# mapfile -t mem 00:03:38.257 15:13:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.257 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.257 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.258 15:13:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109215628 kB' 'MemAvailable: 112746072 kB' 'Buffers: 4124 kB' 'Cached: 10549776 kB' 'SwapCached: 0 kB' 'Active: 7658864 kB' 'Inactive: 3515716 kB' 'Active(anon): 6968468 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623956 kB' 'Mapped: 176884 kB' 'Shmem: 6347788 kB' 'KReclaimable: 297324 kB' 'Slab: 1072380 kB' 'SReclaimable: 297324 kB' 'SUnreclaim: 775056 kB' 'KernelStack: 27024 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8346836 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234828 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB' 00:03:38.258 15:13:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.258 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.258 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.258 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.258 15:13:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.258 15:13:55 -- setup/common.sh@32 -- # continue 
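The trace above shows SPDK's `get_meminfo` helper snapshotting `/proc/meminfo` with `mapfile`, then scanning it one `key: value` pair at a time until the requested key matches; every non-matching field emits one `continue` line in the xtrace, which is why the log repeats per field. A minimal standalone sketch of that parsing pattern (hypothetical helper, not the actual `setup/common.sh`):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo-style scan seen in the trace: read
# "key: value [unit]" pairs with IFS=': ' and print the value of
# the requested key. Function name and second argument are
# illustrative; the real helper lives in setup/common.sh.
get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        # Each non-matching field here corresponds to one
        # "continue" line in the xtrace log.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Example against a small synthetic snapshot:
printf '%s\n' 'MemTotal: 126338868 kB' 'HugePages_Total: 1024' \
    > /tmp/meminfo.sample
get_meminfo_sketch HugePages_Total /tmp/meminfo.sample   # prints 1024
```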
00:03:38.258 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.258 15:13:55 -- setup/common.sh@31 -- # read -r var val _ [... identical '[[ <field> == HugePages_Total ]] / continue / IFS / read' trace repeated for each /proc/meminfo field from MemAvailable onward; section ends mid-scan at FileHugePages ...]
setup/common.sh@31 -- # read -r var val _ 00:03:38.259 15:13:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.259 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.259 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.259 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.259 15:13:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.259 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.259 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.259 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.259 15:13:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.259 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.259 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.259 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.259 15:13:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.259 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.259 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.259 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.259 15:13:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.259 15:13:55 -- setup/common.sh@33 -- # echo 1024 00:03:38.259 15:13:55 -- setup/common.sh@33 -- # return 0 00:03:38.259 15:13:55 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:38.259 15:13:55 -- setup/hugepages.sh@112 -- # get_nodes 00:03:38.259 15:13:55 -- setup/hugepages.sh@27 -- # local node 00:03:38.259 15:13:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.259 15:13:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:38.259 15:13:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.259 15:13:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:38.259 
15:13:55 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:38.259 15:13:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:38.259 15:13:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.259 15:13:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.259 15:13:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:38.259 15:13:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.259 15:13:55 -- setup/common.sh@18 -- # local node=0 00:03:38.259 15:13:55 -- setup/common.sh@19 -- # local var val 00:03:38.259 15:13:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.259 15:13:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.259 15:13:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:38.259 15:13:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:38.259 15:13:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.259 15:13:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.522 15:13:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59359696 kB' 'MemUsed: 6299312 kB' 'SwapCached: 0 kB' 'Active: 2549492 kB' 'Inactive: 106348 kB' 'Active(anon): 2239972 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 106348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2547752 kB' 'Mapped: 108380 kB' 'AnonPages: 111312 kB' 'Shmem: 2131884 kB' 'KernelStack: 13496 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159900 kB' 'Slab: 520532 kB' 'SReclaimable: 159900 kB' 'SUnreclaim: 360632 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 
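The long `[[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue` runs traced above are one loop in setup/common.sh scanning meminfo output key by key. A condensed, self-contained sketch of that scan (the function name and sample data here are illustrative, not part of SPDK):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo-style scan traced above: split each meminfo
# line on ':' and spaces, skip keys until the requested one, print its value.
# get_meminfo_value is an illustrative name, not the SPDK helper itself.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching key is a "continue" in the trace above
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# Small illustrative sample in /proc/meminfo format
sample_meminfo='MemTotal: 65659008 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Surp: 0'

get_meminfo_value HugePages_Total <<<"$sample_meminfo"
```

The trace's `echo 1024` / `return 0` pair corresponds to the match branch; everything before it is the `continue` path.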
00:03:38.522 15:13:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.522 15:13:55 -- 
setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.522 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.522 15:13:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # [[ NFS_Unstable 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:38.523 15:13:55 -- setup/common.sh@32 -- # continue 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.523 15:13:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.523 15:13:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.523 15:13:55 -- setup/common.sh@33 -- # echo 0 00:03:38.523 15:13:55 -- setup/common.sh@33 -- # return 0 00:03:38.523 15:13:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:38.523 15:13:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:38.523 15:13:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:38.523 15:13:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:38.523 15:13:55 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:38.523 node0=1024 expecting 1024 00:03:38.523 15:13:55 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:38.523 15:13:55 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:38.523 15:13:55 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:38.523 15:13:55 -- setup/hugepages.sh@202 -- # setup output 00:03:38.523 15:13:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.523 15:13:55 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:41.832 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:41.832 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:41.832 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:41.832 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:41.832 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:41.832 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:41.832 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:41.832 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:41.832 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:41.832 0000:65:00.0 
(144d a80a): Already using the vfio-pci driver 00:03:41.832 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:41.832 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:41.832 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:41.832 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:41.832 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:41.832 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:41.832 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:41.832 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:41.832 15:13:59 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:41.832 15:13:59 -- setup/hugepages.sh@89 -- # local node 00:03:41.832 15:13:59 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:41.832 15:13:59 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:41.832 15:13:59 -- setup/hugepages.sh@92 -- # local surp 00:03:41.832 15:13:59 -- setup/hugepages.sh@93 -- # local resv 00:03:41.832 15:13:59 -- setup/hugepages.sh@94 -- # local anon 00:03:41.832 15:13:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:41.832 15:13:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:41.832 15:13:59 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:41.832 15:13:59 -- setup/common.sh@18 -- # local node= 00:03:41.832 15:13:59 -- setup/common.sh@19 -- # local var val 00:03:41.832 15:13:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.832 15:13:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.832 15:13:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.832 15:13:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.832 15:13:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.832 15:13:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.832 15:13:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.832 15:13:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109231540 kB' 'MemAvailable: 112761984 kB' 'Buffers: 4124 kB' 'Cached: 10549860 kB' 'SwapCached: 0 kB' 'Active: 7660784 kB' 'Inactive: 3515716 kB' 'Active(anon): 6970388 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 626024 kB' 'Mapped: 176980 kB' 'Shmem: 6347872 kB' 'KReclaimable: 297324 kB' 'Slab: 1072656 kB' 'SReclaimable: 297324 kB' 'SUnreclaim: 775332 kB' 'KernelStack: 27056 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8347556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234812 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB' 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # [[ MemAvailable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.832 
15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.832 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.832 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # continue 
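The `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` test at setup/hugepages.sh@96 above is the transparent-hugepage gate: AnonHugePages is only queried when THP is not globally set to `[never]`. The backslash run in the trace is just xtrace escaping of a `*"[never]"*` glob. A minimal sketch, with the sysfs value hardcoded for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the THP gate from the trace. On a live system the value would
# come from /sys/kernel/mm/transparent_hugepage/enabled; the brackets mark
# the active mode. Here it is a hardcoded sample matching the trace.
thp_enabled='always [madvise] never'

if [[ $thp_enabled != *"[never]"* ]]; then
    # THP is madvise or always: anonymous hugepages can appear
    msg="THP not disabled; AnonHugePages is relevant"
    echo "$msg"
fi
```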
00:03:41.833 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 
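The `mem_f=/proc/meminfo` / `[[ -e /sys/devices/system/node/node/meminfo ]]` lines traced at setup/common.sh@22-24 show how the scan picks its input file: system-wide meminfo by default, the per-node sysfs file when a node number is given and present. A sketch of that selection (node 0 chosen for illustration; the sysfs file may not exist in every environment, e.g. containers, which is exactly why the existence test is there):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo source selection from the trace: fall back to
# /proc/meminfo unless a per-node meminfo file exists for the requested node.
node=0
mem_f=/proc/meminfo
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
echo "$mem_f"
```

Note that in the trace with `local node=` (empty), the `-e` test is against the literal path `.../node/meminfo`, which fails, so the system-wide file is kept.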
00:03:41.833 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.833 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.833 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.833 15:13:59 -- setup/common.sh@31 
-- # read -r var val _
00:03:41.833 15:13:59 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:41.833 15:13:59 -- setup/common.sh@32 -- # continue
[... identical compare-and-continue trace for WritebackTmp through HardwareCorrupted elided ...]
00:03:41.833 15:13:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:41.833 15:13:59 -- setup/common.sh@33 -- # echo 0
00:03:41.833 15:13:59 -- setup/common.sh@33 -- # return 0
00:03:41.833 15:13:59 -- setup/hugepages.sh@97 -- # anon=0
00:03:41.833 15:13:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:41.833 15:13:59 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:41.833 15:13:59 -- setup/common.sh@18 -- # local node=
00:03:41.833 15:13:59 -- setup/common.sh@19 -- # local var val
00:03:41.833 15:13:59 -- setup/common.sh@20 -- # local mem_f mem
00:03:41.833 15:13:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.833 15:13:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:41.833 15:13:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:41.833 15:13:59 -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.833 15:13:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:41.833 15:13:59 -- setup/common.sh@31 -- # IFS=': '
00:03:41.833 15:13:59 -- setup/common.sh@31 -- # read -r var val _
00:03:41.833 15:13:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109232196 kB' 'MemAvailable: 112762640 kB' 'Buffers: 4124 kB' 'Cached: 10549860 kB' 'SwapCached: 0 kB' 'Active: 7661400 kB' 'Inactive: 3515716 kB' 'Active(anon): 6971004 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 626684 kB' 'Mapped: 176980 kB' 'Shmem: 6347872 kB' 'KReclaimable: 297324 kB' 'Slab: 1072644 kB' 'SReclaimable: 297324 kB' 'SUnreclaim: 775320 kB' 'KernelStack: 27040 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8347568 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234796 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB'
[... compare-and-continue trace for every field from MemTotal through HugePages_Rsvd (none match HugePages_Surp) elided ...]
00:03:41.835 15:13:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:41.835 15:13:59 -- setup/common.sh@33 -- # echo 0
00:03:41.835 15:13:59 -- setup/common.sh@33 -- # return 0
00:03:41.835 15:13:59 -- setup/hugepages.sh@99 -- # surp=0
00:03:41.835 15:13:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:41.835 15:13:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[... setup/common.sh@18-31 local/mapfile/IFS setup identical to the call above elided ...]
00:03:41.835 15:13:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109232808 kB' 'MemAvailable: 112763252 kB' 'Buffers: 4124 kB' 'Cached: 10549860 kB' 'SwapCached: 0 kB' 'Active: 7660700 kB' 'Inactive: 3515716 kB' 'Active(anon): 6970304 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625908 kB' 'Mapped: 176968 kB' 'Shmem: 6347872 kB' 'KReclaimable: 297324 kB' 'Slab: 1072640 kB' 'SReclaimable: 297324 kB' 'SUnreclaim: 775316 kB' 'KernelStack: 27024 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8347692 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234780 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB'
[... compare-and-continue trace for every field from MemTotal through HugePages_Free (none match HugePages_Rsvd) elided ...]
00:03:41.836 15:13:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:41.836 15:13:59 -- setup/common.sh@33 -- # echo 0
00:03:41.836 15:13:59 -- setup/common.sh@33 -- # return 0
00:03:41.836 15:13:59 -- setup/hugepages.sh@100 -- # resv=0
00:03:41.836 15:13:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:41.836 15:13:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:41.836 15:13:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:41.836 15:13:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:41.836 15:13:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:41.836 15:13:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:41.836 15:13:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:41.836 15:13:59 -- setup/common.sh@17 -- # local get=HugePages_Total
[... setup/common.sh@18-31 local/mapfile/IFS setup identical to the calls above elided ...]
00:03:41.836 15:13:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109233176 kB' 'MemAvailable: 112763620 kB' 'Buffers: 4124 kB' 'Cached: 10549868 kB' 'SwapCached: 0 kB' 'Active: 7660508 kB' 'Inactive: 3515716 kB' 'Active(anon): 6970112 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625716 kB' 'Mapped: 176896 kB' 'Shmem: 6347880 kB' 'KReclaimable: 297324 kB' 'Slab: 1072660 kB' 'SReclaimable: 297324 kB' 'SUnreclaim: 775336 kB' 'KernelStack: 26992 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8347596 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234748 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3632500 kB' 'DirectMap2M: 42184704 kB' 'DirectMap1G: 90177536 kB'
00:03:41.836 15:13:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:41.836 15:13:59 -- setup/common.sh@32 -- # continue
00:03:41.836 15:13:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:41.836 15:13:59 -- setup/common.sh@32 -- # continue
00:03:41.836 15:13:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:41.836 15:13:59 -- setup/common.sh@32 -- # continue
00:03:41.836 15:13:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:41.836 15:13:59 -- setup/common.sh@32 -- # continue
00:03:41.836 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.836 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.836 15:13:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.836 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.836 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.836 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 
15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 
00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 
15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.837 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.837 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 
-- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.838 15:13:59 -- setup/common.sh@33 -- # echo 1024 00:03:41.838 15:13:59 -- setup/common.sh@33 -- # return 0 00:03:41.838 15:13:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:41.838 15:13:59 -- setup/hugepages.sh@112 -- # get_nodes 00:03:41.838 15:13:59 -- setup/hugepages.sh@27 -- # local node 00:03:41.838 15:13:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.838 15:13:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:41.838 15:13:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.838 15:13:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:41.838 15:13:59 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:41.838 15:13:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:41.838 15:13:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:41.838 15:13:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:41.838 15:13:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:41.838 15:13:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.838 15:13:59 -- setup/common.sh@18 -- # local node=0 00:03:41.838 15:13:59 -- setup/common.sh@19 -- # local var 
val 00:03:41.838 15:13:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.838 15:13:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.838 15:13:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:41.838 15:13:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:41.838 15:13:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.838 15:13:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59374016 kB' 'MemUsed: 6284992 kB' 'SwapCached: 0 kB' 'Active: 2549564 kB' 'Inactive: 106348 kB' 'Active(anon): 2240044 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 106348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2547828 kB' 'Mapped: 108396 kB' 'AnonPages: 111324 kB' 'Shmem: 2131960 kB' 'KernelStack: 13512 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159900 kB' 'Slab: 520824 kB' 'SReclaimable: 159900 kB' 'SUnreclaim: 360924 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.838 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.838 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.839 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.839 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.839 15:13:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.839 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.839 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.839 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.839 15:13:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.839 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.839 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.839 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.839 15:13:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.839 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.839 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.839 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.839 15:13:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.839 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.839 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.839 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.839 15:13:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.839 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.839 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.839 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.839 15:13:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.839 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.839 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.839 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.839 15:13:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.839 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.839 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.839 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.839 15:13:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.839 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.839 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.839 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.839 15:13:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.839 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.839 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.839 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.839 15:13:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.839 15:13:59 -- setup/common.sh@32 -- # continue 00:03:41.839 15:13:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.839 15:13:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.839 15:13:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.839 15:13:59 -- setup/common.sh@33 -- # echo 0 00:03:41.839 15:13:59 -- setup/common.sh@33 -- # return 0 00:03:41.839 15:13:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:41.839 15:13:59 -- setup/hugepages.sh@126 -- # for node in 
"${!nodes_test[@]}" 00:03:41.839 15:13:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:41.839 15:13:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:41.839 15:13:59 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:41.839 node0=1024 expecting 1024 00:03:41.839 15:13:59 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:41.839 00:03:41.839 real 0m7.301s 00:03:41.839 user 0m2.780s 00:03:41.839 sys 0m4.550s 00:03:41.839 15:13:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:41.839 15:13:59 -- common/autotest_common.sh@10 -- # set +x 00:03:41.839 ************************************ 00:03:41.839 END TEST no_shrink_alloc 00:03:41.839 ************************************ 00:03:41.839 15:13:59 -- setup/hugepages.sh@217 -- # clear_hp 00:03:41.839 15:13:59 -- setup/hugepages.sh@37 -- # local node hp 00:03:41.839 15:13:59 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:41.839 15:13:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:41.839 15:13:59 -- setup/hugepages.sh@41 -- # echo 0 00:03:41.839 15:13:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:41.839 15:13:59 -- setup/hugepages.sh@41 -- # echo 0 00:03:41.839 15:13:59 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:41.839 15:13:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:41.839 15:13:59 -- setup/hugepages.sh@41 -- # echo 0 00:03:41.839 15:13:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:41.839 15:13:59 -- setup/hugepages.sh@41 -- # echo 0 00:03:41.839 15:13:59 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:41.839 15:13:59 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:41.839 00:03:41.839 real 0m27.886s 00:03:41.839 user 0m10.827s 00:03:41.839 sys 
0m17.163s 00:03:41.839 15:13:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:41.839 15:13:59 -- common/autotest_common.sh@10 -- # set +x 00:03:41.839 ************************************ 00:03:41.839 END TEST hugepages 00:03:41.839 ************************************ 00:03:42.100 15:13:59 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:42.100 15:13:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:42.100 15:13:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:42.100 15:13:59 -- common/autotest_common.sh@10 -- # set +x 00:03:42.100 ************************************ 00:03:42.100 START TEST driver 00:03:42.100 ************************************ 00:03:42.100 15:13:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:42.100 * Looking for test storage... 00:03:42.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:42.100 15:13:59 -- setup/driver.sh@68 -- # setup reset 00:03:42.100 15:13:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:42.100 15:13:59 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:47.392 15:14:04 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:47.392 15:14:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:47.392 15:14:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:47.392 15:14:04 -- common/autotest_common.sh@10 -- # set +x 00:03:47.392 ************************************ 00:03:47.392 START TEST guess_driver 00:03:47.392 ************************************ 00:03:47.392 15:14:04 -- common/autotest_common.sh@1111 -- # guess_driver 00:03:47.392 15:14:04 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:47.392 15:14:04 -- setup/driver.sh@47 -- # local fail=0 00:03:47.392 15:14:04 -- 
setup/driver.sh@49 -- # pick_driver 00:03:47.392 15:14:04 -- setup/driver.sh@36 -- # vfio 00:03:47.392 15:14:04 -- setup/driver.sh@21 -- # local iommu_grups 00:03:47.392 15:14:04 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:47.392 15:14:04 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:47.392 15:14:04 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:47.392 15:14:04 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:47.392 15:14:04 -- setup/driver.sh@29 -- # (( 322 > 0 )) 00:03:47.392 15:14:04 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:47.392 15:14:04 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:47.392 15:14:04 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:47.392 15:14:04 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:47.392 15:14:04 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:47.392 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:47.392 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:47.392 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:47.392 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:47.392 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:47.392 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:47.392 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:47.392 15:14:04 -- setup/driver.sh@30 -- # return 0 00:03:47.392 15:14:04 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:47.392 15:14:04 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:47.392 15:14:04 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:47.392 15:14:04 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 
00:03:47.392 Looking for driver=vfio-pci 00:03:47.392 15:14:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.392 15:14:04 -- setup/driver.sh@45 -- # setup output config 00:03:47.392 15:14:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.392 15:14:04 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:50.690 15:14:07 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.690 15:14:07 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.690 15:14:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.690 15:14:07 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.690 15:14:07 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.690 15:14:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.690 15:14:07 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.690 15:14:07 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.690 15:14:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.690 15:14:07 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.690 15:14:07 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.690 15:14:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.690 15:14:07 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.690 15:14:07 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.690 15:14:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.690 15:14:07 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.690 15:14:07 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.690 15:14:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.690 15:14:07 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.690 15:14:07 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.690 15:14:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.690 15:14:07 
-- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.690 15:14:07 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.690 15:14:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.690 15:14:07 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.690 15:14:07 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.690 15:14:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.690 15:14:07 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.690 15:14:07 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.690 15:14:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.690 15:14:07 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.690 15:14:07 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.690 15:14:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.690 15:14:07 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.690 15:14:07 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.690 15:14:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.690 15:14:07 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.690 15:14:07 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.690 15:14:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.690 15:14:07 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.690 15:14:07 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.690 15:14:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.690 15:14:07 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.690 15:14:07 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.690 15:14:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.690 15:14:07 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.690 15:14:07 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.690 15:14:07 -- setup/driver.sh@57 -- # read -r _ _ _ 
_ marker setup_driver 00:03:50.690 15:14:07 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.690 15:14:07 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:50.690 15:14:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.690 15:14:07 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:50.690 15:14:07 -- setup/driver.sh@65 -- # setup reset 00:03:50.690 15:14:07 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:50.690 15:14:07 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:55.972 00:03:55.972 real 0m7.766s 00:03:55.972 user 0m2.136s 00:03:55.972 sys 0m4.587s 00:03:55.972 15:14:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:55.972 15:14:12 -- common/autotest_common.sh@10 -- # set +x 00:03:55.972 ************************************ 00:03:55.972 END TEST guess_driver 00:03:55.972 ************************************ 00:03:55.972 00:03:55.972 real 0m13.022s 00:03:55.972 user 0m3.790s 00:03:55.972 sys 0m7.321s 00:03:55.972 15:14:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:55.972 15:14:12 -- common/autotest_common.sh@10 -- # set +x 00:03:55.972 ************************************ 00:03:55.972 END TEST driver 00:03:55.972 ************************************ 00:03:55.972 15:14:12 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:55.972 15:14:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:55.972 15:14:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:55.972 15:14:12 -- common/autotest_common.sh@10 -- # set +x 00:03:55.972 ************************************ 00:03:55.972 START TEST devices 00:03:55.972 ************************************ 00:03:55.972 15:14:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:55.972 * Looking for test storage... 
00:03:55.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:55.972 15:14:12 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:55.972 15:14:12 -- setup/devices.sh@192 -- # setup reset 00:03:55.972 15:14:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.972 15:14:12 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:59.275 15:14:16 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:59.275 15:14:16 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:59.275 15:14:16 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:59.275 15:14:16 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:59.275 15:14:16 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:59.275 15:14:16 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:59.275 15:14:16 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:59.275 15:14:16 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:59.275 15:14:16 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:59.275 15:14:16 -- setup/devices.sh@196 -- # blocks=() 00:03:59.275 15:14:16 -- setup/devices.sh@196 -- # declare -a blocks 00:03:59.275 15:14:16 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:59.276 15:14:16 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:59.276 15:14:16 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:59.276 15:14:16 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:59.276 15:14:16 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:59.276 15:14:16 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:59.276 15:14:16 -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:03:59.276 15:14:16 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:59.276 15:14:16 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:59.276 15:14:16 -- scripts/common.sh@378 
-- # local block=nvme0n1 pt 00:03:59.276 15:14:16 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:59.537 No valid GPT data, bailing 00:03:59.537 15:14:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:59.537 15:14:16 -- scripts/common.sh@391 -- # pt= 00:03:59.537 15:14:16 -- scripts/common.sh@392 -- # return 1 00:03:59.537 15:14:16 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:59.537 15:14:16 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:59.537 15:14:16 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:59.537 15:14:16 -- setup/common.sh@80 -- # echo 1920383410176 00:03:59.537 15:14:16 -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:03:59.537 15:14:16 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:59.537 15:14:16 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:03:59.537 15:14:16 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:59.537 15:14:16 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:59.537 15:14:16 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:59.537 15:14:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:59.537 15:14:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.537 15:14:16 -- common/autotest_common.sh@10 -- # set +x 00:03:59.537 ************************************ 00:03:59.537 START TEST nvme_mount 00:03:59.537 ************************************ 00:03:59.537 15:14:16 -- common/autotest_common.sh@1111 -- # nvme_mount 00:03:59.537 15:14:16 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:59.537 15:14:16 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:59.537 15:14:16 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.537 15:14:16 -- setup/devices.sh@98 -- # 
nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:59.537 15:14:16 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:59.537 15:14:16 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:59.537 15:14:16 -- setup/common.sh@40 -- # local part_no=1 00:03:59.537 15:14:16 -- setup/common.sh@41 -- # local size=1073741824 00:03:59.537 15:14:16 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:59.537 15:14:16 -- setup/common.sh@44 -- # parts=() 00:03:59.537 15:14:16 -- setup/common.sh@44 -- # local parts 00:03:59.537 15:14:16 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:59.537 15:14:16 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:59.537 15:14:16 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:59.537 15:14:16 -- setup/common.sh@46 -- # (( part++ )) 00:03:59.537 15:14:16 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:59.537 15:14:16 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:59.537 15:14:16 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:59.537 15:14:16 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:00.478 Creating new GPT entries in memory. 00:04:00.478 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:00.478 other utilities. 00:04:00.478 15:14:17 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:00.478 15:14:17 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:00.478 15:14:17 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:00.478 15:14:17 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:00.478 15:14:17 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:01.862 Creating new GPT entries in memory. 00:04:01.862 The operation has completed successfully. 
00:04:01.862 15:14:18 -- setup/common.sh@57 -- # (( part++ )) 00:04:01.862 15:14:18 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:01.862 15:14:18 -- setup/common.sh@62 -- # wait 1396994 00:04:01.862 15:14:18 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.862 15:14:18 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:01.862 15:14:18 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.862 15:14:18 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:01.862 15:14:18 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:01.862 15:14:18 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.862 15:14:19 -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:01.862 15:14:19 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:01.862 15:14:19 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:01.862 15:14:19 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.862 15:14:19 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:01.862 15:14:19 -- setup/devices.sh@53 -- # local found=0 00:04:01.862 15:14:19 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:01.862 15:14:19 -- setup/devices.sh@56 -- # : 00:04:01.862 15:14:19 -- setup/devices.sh@59 -- # local pci status 00:04:01.862 15:14:19 -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:01.862 15:14:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:01.862 15:14:19 -- setup/devices.sh@47 -- # setup output config 00:04:01.862 15:14:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.862 15:14:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:05.166 15:14:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.166 15:14:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.166 15:14:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.166 15:14:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.166 15:14:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.166 15:14:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.166 15:14:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.166 15:14:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.166 15:14:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.166 15:14:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.166 15:14:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.166 15:14:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.166 15:14:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.166 15:14:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.166 15:14:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.166 15:14:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.166 15:14:22 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.166 15:14:22 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 
00:04:05.166 15:14:22 -- setup/devices.sh@63 -- # found=1 00:04:05.166 15:14:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.166 15:14:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.166 15:14:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.166 15:14:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.166 15:14:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.166 15:14:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.166 15:14:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.166 15:14:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.166 15:14:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.166 15:14:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.166 15:14:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.166 15:14:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.166 15:14:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.166 15:14:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.166 15:14:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.166 15:14:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:05.166 15:14:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.427 15:14:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:05.427 15:14:22 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:05.427 15:14:22 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.427 15:14:22 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:05.427 15:14:22 -- 
setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:05.427 15:14:22 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:05.427 15:14:22 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.427 15:14:22 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.427 15:14:22 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:05.427 15:14:22 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:05.427 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:05.427 15:14:22 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:05.427 15:14:22 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:05.728 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:05.728 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:05.728 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:05.728 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:05.728 15:14:22 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:05.728 15:14:22 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:05.728 15:14:22 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.728 15:14:22 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:05.728 15:14:22 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:05.728 15:14:23 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.728 15:14:23 -- setup/devices.sh@116 -- # verify 0000:65:00.0 
nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:05.728 15:14:23 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:05.728 15:14:23 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:05.728 15:14:23 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.728 15:14:23 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:05.728 15:14:23 -- setup/devices.sh@53 -- # local found=0 00:04:05.728 15:14:23 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:05.728 15:14:23 -- setup/devices.sh@56 -- # : 00:04:05.728 15:14:23 -- setup/devices.sh@59 -- # local pci status 00:04:05.728 15:14:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.728 15:14:23 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:05.728 15:14:23 -- setup/devices.sh@47 -- # setup output config 00:04:05.728 15:14:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.728 15:14:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:09.067 15:14:26 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.067 15:14:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.067 15:14:26 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.067 15:14:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.067 15:14:26 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.067 15:14:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.067 15:14:26 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.067 15:14:26 -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:04:09.067 15:14:26 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.067 15:14:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.067 15:14:26 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.067 15:14:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.067 15:14:26 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.067 15:14:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.067 15:14:26 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.067 15:14:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.067 15:14:26 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.067 15:14:26 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:09.067 15:14:26 -- setup/devices.sh@63 -- # found=1 00:04:09.067 15:14:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.067 15:14:26 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.067 15:14:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.067 15:14:26 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.067 15:14:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.067 15:14:26 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.067 15:14:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.067 15:14:26 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.067 15:14:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.067 15:14:26 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.067 15:14:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.067 15:14:26 -- 
setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.067 15:14:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.067 15:14:26 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.067 15:14:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.067 15:14:26 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.067 15:14:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.334 15:14:26 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:09.334 15:14:26 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:09.334 15:14:26 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.334 15:14:26 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:09.334 15:14:26 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:09.334 15:14:26 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.334 15:14:26 -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:09.334 15:14:26 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:09.334 15:14:26 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:09.334 15:14:26 -- setup/devices.sh@50 -- # local mount_point= 00:04:09.334 15:14:26 -- setup/devices.sh@51 -- # local test_file= 00:04:09.334 15:14:26 -- setup/devices.sh@53 -- # local found=0 00:04:09.334 15:14:26 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:09.334 15:14:26 -- setup/devices.sh@59 -- # local pci status 00:04:09.334 15:14:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.334 15:14:26 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:09.334 15:14:26 -- setup/devices.sh@47 -- # setup 
output config 00:04:09.334 15:14:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.334 15:14:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:12.637 15:14:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.637 15:14:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.637 15:14:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.637 15:14:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.638 15:14:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.638 15:14:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.638 15:14:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.638 15:14:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.638 15:14:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.638 15:14:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.638 15:14:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.638 15:14:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.638 15:14:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.638 15:14:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.638 15:14:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.638 15:14:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.638 15:14:30 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.638 15:14:30 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:12.638 15:14:30 -- setup/devices.sh@63 -- # found=1 00:04:12.638 15:14:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.638 15:14:30 -- 
setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.638 15:14:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.638 15:14:30 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.638 15:14:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.638 15:14:30 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.638 15:14:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.638 15:14:30 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.638 15:14:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.638 15:14:30 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.638 15:14:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.638 15:14:30 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.638 15:14:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.638 15:14:30 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.638 15:14:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.638 15:14:30 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:12.638 15:14:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.211 15:14:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:13.211 15:14:30 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:13.211 15:14:30 -- setup/devices.sh@68 -- # return 0 00:04:13.211 15:14:30 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:13.211 15:14:30 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.211 15:14:30 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:13.211 15:14:30 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:13.211 15:14:30 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:13.211 /dev/nvme0n1: 2 bytes were erased at 
offset 0x00000438 (ext4): 53 ef 00:04:13.211 00:04:13.211 real 0m13.561s 00:04:13.211 user 0m4.317s 00:04:13.211 sys 0m7.083s 00:04:13.211 15:14:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:13.211 15:14:30 -- common/autotest_common.sh@10 -- # set +x 00:04:13.211 ************************************ 00:04:13.211 END TEST nvme_mount 00:04:13.211 ************************************ 00:04:13.211 15:14:30 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:13.211 15:14:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:13.211 15:14:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:13.211 15:14:30 -- common/autotest_common.sh@10 -- # set +x 00:04:13.211 ************************************ 00:04:13.211 START TEST dm_mount 00:04:13.211 ************************************ 00:04:13.211 15:14:30 -- common/autotest_common.sh@1111 -- # dm_mount 00:04:13.211 15:14:30 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:13.211 15:14:30 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:13.211 15:14:30 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:13.211 15:14:30 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:13.211 15:14:30 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:13.211 15:14:30 -- setup/common.sh@40 -- # local part_no=2 00:04:13.211 15:14:30 -- setup/common.sh@41 -- # local size=1073741824 00:04:13.211 15:14:30 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:13.211 15:14:30 -- setup/common.sh@44 -- # parts=() 00:04:13.211 15:14:30 -- setup/common.sh@44 -- # local parts 00:04:13.211 15:14:30 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:13.211 15:14:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.211 15:14:30 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:13.211 15:14:30 -- setup/common.sh@46 -- # (( part++ )) 00:04:13.211 15:14:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.211 15:14:30 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 
00:04:13.211 15:14:30 -- setup/common.sh@46 -- # (( part++ )) 00:04:13.211 15:14:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.211 15:14:30 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:13.211 15:14:30 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:13.211 15:14:30 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:14.595 Creating new GPT entries in memory. 00:04:14.595 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:14.595 other utilities. 00:04:14.595 15:14:31 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:14.595 15:14:31 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:14.595 15:14:31 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:14.596 15:14:31 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:14.596 15:14:31 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:15.539 Creating new GPT entries in memory. 00:04:15.539 The operation has completed successfully. 00:04:15.539 15:14:32 -- setup/common.sh@57 -- # (( part++ )) 00:04:15.539 15:14:32 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:15.539 15:14:32 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:15.539 15:14:32 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:15.539 15:14:32 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:16.481 The operation has completed successfully. 
00:04:16.481 15:14:33 -- setup/common.sh@57 -- # (( part++ )) 00:04:16.481 15:14:33 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:16.481 15:14:33 -- setup/common.sh@62 -- # wait 1402201 00:04:16.481 15:14:33 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:16.481 15:14:33 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:16.481 15:14:33 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:16.481 15:14:33 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:16.481 15:14:33 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:16.481 15:14:33 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:16.481 15:14:33 -- setup/devices.sh@161 -- # break 00:04:16.481 15:14:33 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:16.481 15:14:33 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:16.481 15:14:33 -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:04:16.481 15:14:33 -- setup/devices.sh@166 -- # dm=dm-1 00:04:16.481 15:14:33 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:04:16.481 15:14:33 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:04:16.481 15:14:33 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:16.481 15:14:33 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:16.481 15:14:33 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:16.481 15:14:33 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:16.481 15:14:33 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:16.481 15:14:33 -- setup/common.sh@72 -- # mount 
/dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:16.481 15:14:33 -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:16.481 15:14:33 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:16.481 15:14:33 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:16.481 15:14:33 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:16.481 15:14:33 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:16.481 15:14:33 -- setup/devices.sh@53 -- # local found=0 00:04:16.481 15:14:33 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:16.481 15:14:33 -- setup/devices.sh@56 -- # : 00:04:16.481 15:14:33 -- setup/devices.sh@59 -- # local pci status 00:04:16.481 15:14:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.481 15:14:33 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:16.481 15:14:33 -- setup/devices.sh@47 -- # setup output config 00:04:16.481 15:14:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.481 15:14:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:19.787 15:14:36 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.787 15:14:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.787 15:14:36 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.787 15:14:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.787 15:14:36 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.787 15:14:36 -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:19.787 15:14:36 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.787 15:14:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.787 15:14:36 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.787 15:14:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.787 15:14:36 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.787 15:14:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.787 15:14:36 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.787 15:14:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.787 15:14:36 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.787 15:14:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.787 15:14:36 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.787 15:14:36 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:19.787 15:14:36 -- setup/devices.sh@63 -- # found=1 00:04:19.787 15:14:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.787 15:14:36 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.787 15:14:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.787 15:14:36 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.787 15:14:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.787 15:14:36 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.787 15:14:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.787 15:14:36 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.787 15:14:36 -- setup/devices.sh@60 -- # 
read -r pci _ _ status 00:04:19.787 15:14:36 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.787 15:14:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.787 15:14:36 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.787 15:14:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.787 15:14:36 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.787 15:14:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.787 15:14:36 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.787 15:14:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.049 15:14:37 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:20.049 15:14:37 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:20.049 15:14:37 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:20.049 15:14:37 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:20.049 15:14:37 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:20.049 15:14:37 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:20.049 15:14:37 -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:04:20.049 15:14:37 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:20.049 15:14:37 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:04:20.049 15:14:37 -- setup/devices.sh@50 -- # local mount_point= 00:04:20.049 15:14:37 -- setup/devices.sh@51 -- # local test_file= 00:04:20.049 15:14:37 -- setup/devices.sh@53 -- # local found=0 00:04:20.049 15:14:37 -- setup/devices.sh@55 -- # [[ -n '' ]] 
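The `verify` loop traced above scans `setup.sh config` output one `<pci> <vendor> <device> <status>` line at a time and sets `found=1` when the allowed BDF's status string names the expected mount. A minimal sketch of that scan (the two sample status lines below are illustrative, not copied from this run):

```shell
# Sketch of the verify() scan: match the target BDF, then glob-match the
# expected "Active devices: ..." entry inside its status field.
dev=0000:65:00.0
mounts=nvme0n1:nvme_dm_test
found=0
while read -r pci _ _ status; do
  [[ $pci == "$dev" ]] || continue
  [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
done <<'EOF'
0000:80:01.6 8086 0b00 Idle
0000:65:00.0 144d a80a Active devices: mount@nvme0n1:nvme_dm_test, so not binding PCI dev
EOF
echo "found=$found"
```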
00:04:20.049 15:14:37 -- setup/devices.sh@59 -- # local pci status 00:04:20.049 15:14:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.049 15:14:37 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:20.049 15:14:37 -- setup/devices.sh@47 -- # setup output config 00:04:20.049 15:14:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.049 15:14:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:23.352 15:14:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.352 15:14:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.352 15:14:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.352 15:14:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.352 15:14:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.352 15:14:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.352 15:14:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.352 15:14:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.352 15:14:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.352 15:14:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.352 15:14:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.352 15:14:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.352 15:14:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.352 15:14:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.352 15:14:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.352 15:14:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.352 15:14:40 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.352 15:14:40 -- setup/devices.sh@62 -- # [[ Active 
devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:04:23.352 15:14:40 -- setup/devices.sh@63 -- # found=1 00:04:23.352 15:14:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.352 15:14:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.352 15:14:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.352 15:14:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.352 15:14:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.352 15:14:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.352 15:14:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.352 15:14:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.352 15:14:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.352 15:14:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.352 15:14:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.352 15:14:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.352 15:14:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.352 15:14:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.352 15:14:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.352 15:14:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:23.352 15:14:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.612 15:14:41 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:23.612 15:14:41 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:23.612 15:14:41 -- setup/devices.sh@68 -- # return 0 00:04:23.612 15:14:41 -- setup/devices.sh@187 -- # cleanup_dm 00:04:23.612 15:14:41 -- setup/devices.sh@33 -- # 
mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:23.612 15:14:41 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:23.612 15:14:41 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:23.871 15:14:41 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:23.871 15:14:41 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:23.871 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:23.871 15:14:41 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:23.871 15:14:41 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:23.871 00:04:23.871 real 0m10.436s 00:04:23.871 user 0m2.685s 00:04:23.871 sys 0m4.747s 00:04:23.871 15:14:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:23.871 15:14:41 -- common/autotest_common.sh@10 -- # set +x 00:04:23.871 ************************************ 00:04:23.871 END TEST dm_mount 00:04:23.871 ************************************ 00:04:23.871 15:14:41 -- setup/devices.sh@1 -- # cleanup 00:04:23.871 15:14:41 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:23.871 15:14:41 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.871 15:14:41 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:23.871 15:14:41 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:23.871 15:14:41 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:23.871 15:14:41 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:24.129 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:24.129 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:24.129 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:24.129 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:24.130 15:14:41 -- setup/devices.sh@12 -- # cleanup_dm 00:04:24.130 
15:14:41 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:24.130 15:14:41 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:24.130 15:14:41 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.130 15:14:41 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:24.130 15:14:41 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:24.130 15:14:41 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:24.130 00:04:24.130 real 0m28.758s 00:04:24.130 user 0m8.702s 00:04:24.130 sys 0m14.727s 00:04:24.130 15:14:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:24.130 15:14:41 -- common/autotest_common.sh@10 -- # set +x 00:04:24.130 ************************************ 00:04:24.130 END TEST devices 00:04:24.130 ************************************ 00:04:24.130 00:04:24.130 real 1m36.680s 00:04:24.130 user 0m32.204s 00:04:24.130 sys 0m54.893s 00:04:24.130 15:14:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:24.130 15:14:41 -- common/autotest_common.sh@10 -- # set +x 00:04:24.130 ************************************ 00:04:24.130 END TEST setup.sh 00:04:24.130 ************************************ 00:04:24.130 15:14:41 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:27.421 Hugepages 00:04:27.421 node hugesize free / total 00:04:27.421 node0 1048576kB 0 / 0 00:04:27.421 node0 2048kB 2048 / 2048 00:04:27.421 node1 1048576kB 0 / 0 00:04:27.421 node1 2048kB 0 / 0 00:04:27.421 00:04:27.421 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:27.421 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:27.421 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:27.421 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:27.421 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:27.421 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:27.421 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:27.421 
I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:27.421 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:27.681 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:27.681 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:27.681 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:27.681 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:27.681 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:27.681 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:27.681 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:27.681 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:27.681 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:27.681 15:14:45 -- spdk/autotest.sh@130 -- # uname -s 00:04:27.681 15:14:45 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:27.681 15:14:45 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:27.681 15:14:45 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:30.978 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:30.978 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:30.978 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:30.978 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:30.978 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:30.978 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:30.978 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:31.238 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:31.238 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:31.238 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:31.238 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:31.238 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:31.238 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:31.238 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:31.238 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:31.238 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:33.154 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:33.154 15:14:50 -- common/autotest_common.sh@1518 
-- # sleep 1 00:04:34.540 15:14:51 -- common/autotest_common.sh@1519 -- # bdfs=() 00:04:34.540 15:14:51 -- common/autotest_common.sh@1519 -- # local bdfs 00:04:34.540 15:14:51 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:34.540 15:14:51 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:34.540 15:14:51 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:34.540 15:14:51 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:34.540 15:14:51 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:34.540 15:14:51 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:34.540 15:14:51 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:34.540 15:14:51 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:34.540 15:14:51 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:04:34.540 15:14:51 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:37.840 Waiting for block devices as requested 00:04:37.840 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:37.840 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:37.840 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:37.840 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:37.840 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:38.101 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:38.101 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:38.101 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:38.362 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:38.362 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:38.621 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:38.621 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:38.621 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:38.621 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:38.881 
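The `oacs=' 0x5f'` / `oacs_ns_manage=8` lines in the trace below this point come from parsing `nvme id-ctrl` and masking out one bit of the OACS (Optional Admin Command Support) field. As a sketch of that bit test (the `0x8` mask selects the namespace-management capability bit):

```shell
# OACS value as seen in the trace: 0x5f has bit 3 set, so namespace
# management/attachment is supported and nvme_namespace_revert proceeds.
oacs=0x5f
oacs_ns_manage=$(( oacs & 0x8 ))
echo "ns-manage=$oacs_ns_manage"   # non-zero means supported
```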
0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:38.881 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:38.881 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:39.141 15:14:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:39.141 15:14:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:39.141 15:14:56 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:04:39.141 15:14:56 -- common/autotest_common.sh@1488 -- # grep 0000:65:00.0/nvme/nvme 00:04:39.141 15:14:56 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:39.141 15:14:56 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:39.141 15:14:56 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:39.141 15:14:56 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:04:39.141 15:14:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:39.141 15:14:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:39.141 15:14:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:39.141 15:14:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:39.141 15:14:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:39.141 15:14:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:04:39.141 15:14:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:39.141 15:14:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:39.141 15:14:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:39.141 15:14:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:39.141 15:14:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:39.141 15:14:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:39.141 15:14:56 -- common/autotest_common.sh@1541 -- # 
[[ 0 -eq 0 ]] 00:04:39.141 15:14:56 -- common/autotest_common.sh@1543 -- # continue 00:04:39.141 15:14:56 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:39.141 15:14:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:39.141 15:14:56 -- common/autotest_common.sh@10 -- # set +x 00:04:39.141 15:14:56 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:39.141 15:14:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:39.141 15:14:56 -- common/autotest_common.sh@10 -- # set +x 00:04:39.401 15:14:56 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:42.854 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:42.854 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:42.854 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:42.854 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:42.854 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:42.854 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:42.854 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:42.854 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:42.854 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:42.854 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:42.854 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:42.854 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:42.854 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:42.854 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:42.854 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:42.854 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:42.854 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:43.113 15:15:00 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:43.113 15:15:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:43.113 15:15:00 -- common/autotest_common.sh@10 -- # set +x 00:04:43.113 15:15:00 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:43.113 15:15:00 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 
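`get_nvme_bdfs()` above builds its BDF list by piping `gen_nvme.sh` through `jq -r '.config[].params.traddr'`. A jq-free sketch of the same extraction — the sample JSON shape is inferred from that filter, not copied from this run:

```shell
# Minimal stand-in for: gen_nvme.sh | jq -r '.config[].params.traddr'
sample='{"config":[{"params":{"traddr":"0000:65:00.0","name":"Nvme0"}}]}'
bdf=$(printf '%s' "$sample" | grep -o '"traddr":"[^"]*"' | cut -d'"' -f4)
echo "$bdf"   # the single BDF this run resolves, 0000:65:00.0
```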
00:04:43.113 15:15:00 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:04:43.113 15:15:00 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:43.113 15:15:00 -- common/autotest_common.sh@1563 -- # local bdfs 00:04:43.113 15:15:00 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:04:43.113 15:15:00 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:43.113 15:15:00 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:43.113 15:15:00 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:43.113 15:15:00 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:43.113 15:15:00 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:43.373 15:15:00 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:43.373 15:15:00 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:04:43.373 15:15:00 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:43.373 15:15:00 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:43.373 15:15:00 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:04:43.373 15:15:00 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:43.373 15:15:00 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:04:43.373 15:15:00 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:04:43.373 15:15:00 -- common/autotest_common.sh@1579 -- # return 0 00:04:43.373 15:15:00 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:43.373 15:15:00 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:43.373 15:15:00 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:43.373 15:15:00 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:43.373 15:15:00 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:43.373 15:15:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:43.373 15:15:00 -- 
common/autotest_common.sh@10 -- # set +x 00:04:43.373 15:15:00 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:43.373 15:15:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.373 15:15:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.373 15:15:00 -- common/autotest_common.sh@10 -- # set +x 00:04:43.373 ************************************ 00:04:43.373 START TEST env 00:04:43.373 ************************************ 00:04:43.373 15:15:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:43.633 * Looking for test storage... 00:04:43.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:43.633 15:15:00 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:43.633 15:15:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.633 15:15:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.633 15:15:00 -- common/autotest_common.sh@10 -- # set +x 00:04:43.633 ************************************ 00:04:43.633 START TEST env_memory 00:04:43.633 ************************************ 00:04:43.633 15:15:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:43.633 00:04:43.633 00:04:43.633 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.633 http://cunit.sourceforge.net/ 00:04:43.633 00:04:43.633 00:04:43.633 Suite: memory 00:04:43.892 Test: alloc and free memory map ...[2024-04-26 15:15:01.111231] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:43.892 passed 00:04:43.892 Test: mem map translation ...[2024-04-26 15:15:01.136952] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 
590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:43.892 [2024-04-26 15:15:01.136982] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:43.892 [2024-04-26 15:15:01.137030] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:43.892 [2024-04-26 15:15:01.137037] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:43.892 passed 00:04:43.892 Test: mem map registration ...[2024-04-26 15:15:01.192387] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:43.892 [2024-04-26 15:15:01.192410] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:43.892 passed 00:04:43.892 Test: mem map adjacent registrations ...passed 00:04:43.892 00:04:43.892 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.892 suites 1 1 n/a 0 0 00:04:43.892 tests 4 4 4 0 0 00:04:43.892 asserts 152 152 152 0 n/a 00:04:43.892 00:04:43.892 Elapsed time = 0.196 seconds 00:04:43.892 00:04:43.892 real 0m0.211s 00:04:43.892 user 0m0.197s 00:04:43.892 sys 0m0.013s 00:04:43.892 15:15:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:43.892 15:15:01 -- common/autotest_common.sh@10 -- # set +x 00:04:43.892 ************************************ 00:04:43.892 END TEST env_memory 00:04:43.892 ************************************ 00:04:43.892 15:15:01 -- env/env.sh@11 -- # run_test env_vtophys 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:43.892 15:15:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.892 15:15:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.892 15:15:01 -- common/autotest_common.sh@10 -- # set +x 00:04:44.154 ************************************ 00:04:44.154 START TEST env_vtophys 00:04:44.154 ************************************ 00:04:44.154 15:15:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:44.154 EAL: lib.eal log level changed from notice to debug 00:04:44.154 EAL: Detected lcore 0 as core 0 on socket 0 00:04:44.154 EAL: Detected lcore 1 as core 1 on socket 0 00:04:44.154 EAL: Detected lcore 2 as core 2 on socket 0 00:04:44.154 EAL: Detected lcore 3 as core 3 on socket 0 00:04:44.154 EAL: Detected lcore 4 as core 4 on socket 0 00:04:44.154 EAL: Detected lcore 5 as core 5 on socket 0 00:04:44.154 EAL: Detected lcore 6 as core 6 on socket 0 00:04:44.154 EAL: Detected lcore 7 as core 7 on socket 0 00:04:44.154 EAL: Detected lcore 8 as core 8 on socket 0 00:04:44.154 EAL: Detected lcore 9 as core 9 on socket 0 00:04:44.154 EAL: Detected lcore 10 as core 10 on socket 0 00:04:44.154 EAL: Detected lcore 11 as core 11 on socket 0 00:04:44.154 EAL: Detected lcore 12 as core 12 on socket 0 00:04:44.154 EAL: Detected lcore 13 as core 13 on socket 0 00:04:44.154 EAL: Detected lcore 14 as core 14 on socket 0 00:04:44.154 EAL: Detected lcore 15 as core 15 on socket 0 00:04:44.154 EAL: Detected lcore 16 as core 16 on socket 0 00:04:44.154 EAL: Detected lcore 17 as core 17 on socket 0 00:04:44.154 EAL: Detected lcore 18 as core 18 on socket 0 00:04:44.154 EAL: Detected lcore 19 as core 19 on socket 0 00:04:44.154 EAL: Detected lcore 20 as core 20 on socket 0 00:04:44.154 EAL: Detected lcore 21 as core 21 on socket 0 00:04:44.154 EAL: Detected lcore 22 as core 22 on socket 0 00:04:44.154 EAL: Detected 
lcore 23 as core 23 on socket 0 00:04:44.154 EAL: Detected lcore 24 as core 24 on socket 0 00:04:44.154 EAL: Detected lcore 25 as core 25 on socket 0 00:04:44.154 EAL: Detected lcore 26 as core 26 on socket 0 00:04:44.154 EAL: Detected lcore 27 as core 27 on socket 0 00:04:44.154 EAL: Detected lcore 28 as core 28 on socket 0 00:04:44.154 EAL: Detected lcore 29 as core 29 on socket 0 00:04:44.154 EAL: Detected lcore 30 as core 30 on socket 0 00:04:44.154 EAL: Detected lcore 31 as core 31 on socket 0 00:04:44.154 EAL: Detected lcore 32 as core 32 on socket 0 00:04:44.154 EAL: Detected lcore 33 as core 33 on socket 0 00:04:44.154 EAL: Detected lcore 34 as core 34 on socket 0 00:04:44.154 EAL: Detected lcore 35 as core 35 on socket 0 00:04:44.154 EAL: Detected lcore 36 as core 0 on socket 1 00:04:44.154 EAL: Detected lcore 37 as core 1 on socket 1 00:04:44.154 EAL: Detected lcore 38 as core 2 on socket 1 00:04:44.154 EAL: Detected lcore 39 as core 3 on socket 1 00:04:44.154 EAL: Detected lcore 40 as core 4 on socket 1 00:04:44.154 EAL: Detected lcore 41 as core 5 on socket 1 00:04:44.154 EAL: Detected lcore 42 as core 6 on socket 1 00:04:44.154 EAL: Detected lcore 43 as core 7 on socket 1 00:04:44.154 EAL: Detected lcore 44 as core 8 on socket 1 00:04:44.154 EAL: Detected lcore 45 as core 9 on socket 1 00:04:44.154 EAL: Detected lcore 46 as core 10 on socket 1 00:04:44.154 EAL: Detected lcore 47 as core 11 on socket 1 00:04:44.154 EAL: Detected lcore 48 as core 12 on socket 1 00:04:44.154 EAL: Detected lcore 49 as core 13 on socket 1 00:04:44.154 EAL: Detected lcore 50 as core 14 on socket 1 00:04:44.154 EAL: Detected lcore 51 as core 15 on socket 1 00:04:44.154 EAL: Detected lcore 52 as core 16 on socket 1 00:04:44.154 EAL: Detected lcore 53 as core 17 on socket 1 00:04:44.154 EAL: Detected lcore 54 as core 18 on socket 1 00:04:44.154 EAL: Detected lcore 55 as core 19 on socket 1 00:04:44.154 EAL: Detected lcore 56 as core 20 on socket 1 00:04:44.154 EAL: Detected 
lcore 57 as core 21 on socket 1 00:04:44.154 EAL: Detected lcore 58 as core 22 on socket 1 00:04:44.154 EAL: Detected lcore 59 as core 23 on socket 1 00:04:44.154 EAL: Detected lcore 60 as core 24 on socket 1 00:04:44.154 EAL: Detected lcore 61 as core 25 on socket 1 00:04:44.154 EAL: Detected lcore 62 as core 26 on socket 1 00:04:44.154 EAL: Detected lcore 63 as core 27 on socket 1 00:04:44.154 EAL: Detected lcore 64 as core 28 on socket 1 00:04:44.154 EAL: Detected lcore 65 as core 29 on socket 1 00:04:44.154 EAL: Detected lcore 66 as core 30 on socket 1 00:04:44.154 EAL: Detected lcore 67 as core 31 on socket 1 00:04:44.154 EAL: Detected lcore 68 as core 32 on socket 1 00:04:44.154 EAL: Detected lcore 69 as core 33 on socket 1 00:04:44.154 EAL: Detected lcore 70 as core 34 on socket 1 00:04:44.154 EAL: Detected lcore 71 as core 35 on socket 1 00:04:44.154 EAL: Detected lcore 72 as core 0 on socket 0 00:04:44.154 EAL: Detected lcore 73 as core 1 on socket 0 00:04:44.154 EAL: Detected lcore 74 as core 2 on socket 0 00:04:44.154 EAL: Detected lcore 75 as core 3 on socket 0 00:04:44.154 EAL: Detected lcore 76 as core 4 on socket 0 00:04:44.154 EAL: Detected lcore 77 as core 5 on socket 0 00:04:44.154 EAL: Detected lcore 78 as core 6 on socket 0 00:04:44.154 EAL: Detected lcore 79 as core 7 on socket 0 00:04:44.154 EAL: Detected lcore 80 as core 8 on socket 0 00:04:44.154 EAL: Detected lcore 81 as core 9 on socket 0 00:04:44.154 EAL: Detected lcore 82 as core 10 on socket 0 00:04:44.154 EAL: Detected lcore 83 as core 11 on socket 0 00:04:44.154 EAL: Detected lcore 84 as core 12 on socket 0 00:04:44.154 EAL: Detected lcore 85 as core 13 on socket 0 00:04:44.154 EAL: Detected lcore 86 as core 14 on socket 0 00:04:44.154 EAL: Detected lcore 87 as core 15 on socket 0 00:04:44.154 EAL: Detected lcore 88 as core 16 on socket 0 00:04:44.154 EAL: Detected lcore 89 as core 17 on socket 0 00:04:44.154 EAL: Detected lcore 90 as core 18 on socket 0 00:04:44.154 EAL: Detected 
lcore 91 as core 19 on socket 0 00:04:44.154 EAL: Detected lcore 92 as core 20 on socket 0 00:04:44.154 EAL: Detected lcore 93 as core 21 on socket 0 00:04:44.154 EAL: Detected lcore 94 as core 22 on socket 0 00:04:44.154 EAL: Detected lcore 95 as core 23 on socket 0 00:04:44.154 EAL: Detected lcore 96 as core 24 on socket 0 00:04:44.154 EAL: Detected lcore 97 as core 25 on socket 0 00:04:44.154 EAL: Detected lcore 98 as core 26 on socket 0 00:04:44.154 EAL: Detected lcore 99 as core 27 on socket 0 00:04:44.154 EAL: Detected lcore 100 as core 28 on socket 0 00:04:44.154 EAL: Detected lcore 101 as core 29 on socket 0 00:04:44.154 EAL: Detected lcore 102 as core 30 on socket 0 00:04:44.154 EAL: Detected lcore 103 as core 31 on socket 0 00:04:44.154 EAL: Detected lcore 104 as core 32 on socket 0 00:04:44.154 EAL: Detected lcore 105 as core 33 on socket 0 00:04:44.154 EAL: Detected lcore 106 as core 34 on socket 0 00:04:44.154 EAL: Detected lcore 107 as core 35 on socket 0 00:04:44.154 EAL: Detected lcore 108 as core 0 on socket 1 00:04:44.154 EAL: Detected lcore 109 as core 1 on socket 1 00:04:44.154 EAL: Detected lcore 110 as core 2 on socket 1 00:04:44.154 EAL: Detected lcore 111 as core 3 on socket 1 00:04:44.154 EAL: Detected lcore 112 as core 4 on socket 1 00:04:44.154 EAL: Detected lcore 113 as core 5 on socket 1 00:04:44.154 EAL: Detected lcore 114 as core 6 on socket 1 00:04:44.154 EAL: Detected lcore 115 as core 7 on socket 1 00:04:44.154 EAL: Detected lcore 116 as core 8 on socket 1 00:04:44.154 EAL: Detected lcore 117 as core 9 on socket 1 00:04:44.154 EAL: Detected lcore 118 as core 10 on socket 1 00:04:44.154 EAL: Detected lcore 119 as core 11 on socket 1 00:04:44.154 EAL: Detected lcore 120 as core 12 on socket 1 00:04:44.154 EAL: Detected lcore 121 as core 13 on socket 1 00:04:44.154 EAL: Detected lcore 122 as core 14 on socket 1 00:04:44.154 EAL: Detected lcore 123 as core 15 on socket 1 00:04:44.154 EAL: Detected lcore 124 as core 16 on socket 1 
00:04:44.154 EAL: Detected lcore 125 as core 17 on socket 1 00:04:44.154 EAL: Detected lcore 126 as core 18 on socket 1 00:04:44.154 EAL: Detected lcore 127 as core 19 on socket 1 00:04:44.154 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:44.154 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:44.154 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:44.154 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:44.154 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:44.154 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:44.154 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:44.154 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:44.155 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:44.155 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:44.155 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:44.155 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:44.155 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:44.155 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:44.155 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:44.155 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:44.155 EAL: Maximum logical cores by configuration: 128 00:04:44.155 EAL: Detected CPU lcores: 128 00:04:44.155 EAL: Detected NUMA nodes: 2 00:04:44.155 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:44.155 EAL: Detected shared linkage of DPDK 00:04:44.155 EAL: No shared files mode enabled, IPC will be disabled 00:04:44.155 EAL: Bus pci wants IOVA as 'DC' 00:04:44.155 EAL: Buses did not request a specific IOVA mode. 00:04:44.155 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:44.155 EAL: Selected IOVA mode 'VA' 00:04:44.155 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.155 EAL: Probing VFIO support... 
00:04:44.155 EAL: IOMMU type 1 (Type 1) is supported 00:04:44.155 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:44.155 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:44.155 EAL: VFIO support initialized 00:04:44.155 EAL: Ask a virtual area of 0x2e000 bytes 00:04:44.155 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:44.155 EAL: Setting up physically contiguous memory... 00:04:44.155 EAL: Setting maximum number of open files to 524288 00:04:44.155 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:44.155 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:44.155 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:44.155 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.155 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:44.155 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.155 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.155 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:44.155 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:44.155 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.155 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:44.155 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.155 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.155 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:44.155 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:44.155 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.155 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:44.155 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.155 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.155 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:44.155 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:44.155 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.155 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:44.155 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.155 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.155 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:44.155 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:44.155 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:44.155 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.155 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:44.155 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:44.155 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.155 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:44.155 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:44.155 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.155 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:44.155 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:44.155 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.155 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:44.155 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:44.155 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.155 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:44.155 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:44.155 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.155 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:44.155 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:44.155 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.155 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:44.155 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:44.155 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.155 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:04:44.155 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:44.155 EAL: Hugepages will be freed exactly as allocated. 00:04:44.155 EAL: No shared files mode enabled, IPC is disabled 00:04:44.155 EAL: No shared files mode enabled, IPC is disabled 00:04:44.155 EAL: TSC frequency is ~2400000 KHz 00:04:44.155 EAL: Main lcore 0 is ready (tid=7f7f6e00ea00;cpuset=[0]) 00:04:44.155 EAL: Trying to obtain current memory policy. 00:04:44.155 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.155 EAL: Restoring previous memory policy: 0 00:04:44.155 EAL: request: mp_malloc_sync 00:04:44.155 EAL: No shared files mode enabled, IPC is disabled 00:04:44.155 EAL: Heap on socket 0 was expanded by 2MB 00:04:44.155 EAL: No shared files mode enabled, IPC is disabled 00:04:44.155 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:44.155 EAL: Mem event callback 'spdk:(nil)' registered 00:04:44.155 00:04:44.155 00:04:44.155 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.155 http://cunit.sourceforge.net/ 00:04:44.155 00:04:44.155 00:04:44.155 Suite: components_suite 00:04:44.155 Test: vtophys_malloc_test ...passed 00:04:44.155 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:44.155 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.155 EAL: Restoring previous memory policy: 4 00:04:44.155 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.155 EAL: request: mp_malloc_sync 00:04:44.155 EAL: No shared files mode enabled, IPC is disabled 00:04:44.155 EAL: Heap on socket 0 was expanded by 4MB 00:04:44.155 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.155 EAL: request: mp_malloc_sync 00:04:44.155 EAL: No shared files mode enabled, IPC is disabled 00:04:44.155 EAL: Heap on socket 0 was shrunk by 4MB 00:04:44.155 EAL: Trying to obtain current memory policy. 
00:04:44.155 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.155 EAL: Restoring previous memory policy: 4 00:04:44.155 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.155 EAL: request: mp_malloc_sync 00:04:44.155 EAL: No shared files mode enabled, IPC is disabled 00:04:44.155 EAL: Heap on socket 0 was expanded by 6MB 00:04:44.155 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.155 EAL: request: mp_malloc_sync 00:04:44.155 EAL: No shared files mode enabled, IPC is disabled 00:04:44.155 EAL: Heap on socket 0 was shrunk by 6MB 00:04:44.155 EAL: Trying to obtain current memory policy. 00:04:44.155 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.155 EAL: Restoring previous memory policy: 4 00:04:44.155 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.155 EAL: request: mp_malloc_sync 00:04:44.155 EAL: No shared files mode enabled, IPC is disabled 00:04:44.155 EAL: Heap on socket 0 was expanded by 10MB 00:04:44.155 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.155 EAL: request: mp_malloc_sync 00:04:44.155 EAL: No shared files mode enabled, IPC is disabled 00:04:44.155 EAL: Heap on socket 0 was shrunk by 10MB 00:04:44.155 EAL: Trying to obtain current memory policy. 00:04:44.155 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.155 EAL: Restoring previous memory policy: 4 00:04:44.155 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.155 EAL: request: mp_malloc_sync 00:04:44.155 EAL: No shared files mode enabled, IPC is disabled 00:04:44.155 EAL: Heap on socket 0 was expanded by 18MB 00:04:44.155 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.155 EAL: request: mp_malloc_sync 00:04:44.155 EAL: No shared files mode enabled, IPC is disabled 00:04:44.155 EAL: Heap on socket 0 was shrunk by 18MB 00:04:44.155 EAL: Trying to obtain current memory policy. 
00:04:44.155 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.155 EAL: Restoring previous memory policy: 4 00:04:44.155 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.155 EAL: request: mp_malloc_sync 00:04:44.155 EAL: No shared files mode enabled, IPC is disabled 00:04:44.155 EAL: Heap on socket 0 was expanded by 34MB 00:04:44.155 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.155 EAL: request: mp_malloc_sync 00:04:44.155 EAL: No shared files mode enabled, IPC is disabled 00:04:44.155 EAL: Heap on socket 0 was shrunk by 34MB 00:04:44.155 EAL: Trying to obtain current memory policy. 00:04:44.155 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.155 EAL: Restoring previous memory policy: 4 00:04:44.155 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.155 EAL: request: mp_malloc_sync 00:04:44.155 EAL: No shared files mode enabled, IPC is disabled 00:04:44.155 EAL: Heap on socket 0 was expanded by 66MB 00:04:44.155 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.155 EAL: request: mp_malloc_sync 00:04:44.155 EAL: No shared files mode enabled, IPC is disabled 00:04:44.155 EAL: Heap on socket 0 was shrunk by 66MB 00:04:44.155 EAL: Trying to obtain current memory policy. 00:04:44.155 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.415 EAL: Restoring previous memory policy: 4 00:04:44.415 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.415 EAL: request: mp_malloc_sync 00:04:44.415 EAL: No shared files mode enabled, IPC is disabled 00:04:44.415 EAL: Heap on socket 0 was expanded by 130MB 00:04:44.415 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.415 EAL: request: mp_malloc_sync 00:04:44.415 EAL: No shared files mode enabled, IPC is disabled 00:04:44.415 EAL: Heap on socket 0 was shrunk by 130MB 00:04:44.415 EAL: Trying to obtain current memory policy. 
00:04:44.415 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.415 EAL: Restoring previous memory policy: 4 00:04:44.415 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.415 EAL: request: mp_malloc_sync 00:04:44.415 EAL: No shared files mode enabled, IPC is disabled 00:04:44.415 EAL: Heap on socket 0 was expanded by 258MB 00:04:44.415 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.415 EAL: request: mp_malloc_sync 00:04:44.415 EAL: No shared files mode enabled, IPC is disabled 00:04:44.415 EAL: Heap on socket 0 was shrunk by 258MB 00:04:44.415 EAL: Trying to obtain current memory policy. 00:04:44.415 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.415 EAL: Restoring previous memory policy: 4 00:04:44.415 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.415 EAL: request: mp_malloc_sync 00:04:44.415 EAL: No shared files mode enabled, IPC is disabled 00:04:44.415 EAL: Heap on socket 0 was expanded by 514MB 00:04:44.415 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.675 EAL: request: mp_malloc_sync 00:04:44.675 EAL: No shared files mode enabled, IPC is disabled 00:04:44.675 EAL: Heap on socket 0 was shrunk by 514MB 00:04:44.675 EAL: Trying to obtain current memory policy. 
00:04:44.675 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.675 EAL: Restoring previous memory policy: 4 00:04:44.675 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.675 EAL: request: mp_malloc_sync 00:04:44.675 EAL: No shared files mode enabled, IPC is disabled 00:04:44.675 EAL: Heap on socket 0 was expanded by 1026MB 00:04:44.934 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.934 EAL: request: mp_malloc_sync 00:04:44.934 EAL: No shared files mode enabled, IPC is disabled 00:04:44.934 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:44.934 passed 00:04:44.934 00:04:44.934 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.934 suites 1 1 n/a 0 0 00:04:44.934 tests 2 2 2 0 0 00:04:44.934 asserts 497 497 497 0 n/a 00:04:44.934 00:04:44.934 Elapsed time = 0.660 seconds 00:04:44.934 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.934 EAL: request: mp_malloc_sync 00:04:44.935 EAL: No shared files mode enabled, IPC is disabled 00:04:44.935 EAL: Heap on socket 0 was shrunk by 2MB 00:04:44.935 EAL: No shared files mode enabled, IPC is disabled 00:04:44.935 EAL: No shared files mode enabled, IPC is disabled 00:04:44.935 EAL: No shared files mode enabled, IPC is disabled 00:04:44.935 00:04:44.935 real 0m0.783s 00:04:44.935 user 0m0.414s 00:04:44.935 sys 0m0.339s 00:04:44.935 15:15:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:44.935 15:15:02 -- common/autotest_common.sh@10 -- # set +x 00:04:44.935 ************************************ 00:04:44.935 END TEST env_vtophys 00:04:44.935 ************************************ 00:04:44.935 15:15:02 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:44.935 15:15:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:44.935 15:15:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:44.935 15:15:02 -- common/autotest_common.sh@10 -- # set +x 00:04:45.195 ************************************ 00:04:45.195 
START TEST env_pci 00:04:45.195 ************************************ 00:04:45.195 15:15:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:45.195 00:04:45.195 00:04:45.195 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.195 http://cunit.sourceforge.net/ 00:04:45.195 00:04:45.195 00:04:45.195 Suite: pci 00:04:45.195 Test: pci_hook ...[2024-04-26 15:15:02.464674] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1413815 has claimed it 00:04:45.195 EAL: Cannot find device (10000:00:01.0) 00:04:45.195 EAL: Failed to attach device on primary process 00:04:45.195 passed 00:04:45.195 00:04:45.195 Run Summary: Type Total Ran Passed Failed Inactive 00:04:45.195 suites 1 1 n/a 0 0 00:04:45.195 tests 1 1 1 0 0 00:04:45.195 asserts 25 25 25 0 n/a 00:04:45.195 00:04:45.195 Elapsed time = 0.031 seconds 00:04:45.195 00:04:45.195 real 0m0.054s 00:04:45.195 user 0m0.018s 00:04:45.195 sys 0m0.036s 00:04:45.195 15:15:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:45.195 15:15:02 -- common/autotest_common.sh@10 -- # set +x 00:04:45.195 ************************************ 00:04:45.195 END TEST env_pci 00:04:45.195 ************************************ 00:04:45.195 15:15:02 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:45.195 15:15:02 -- env/env.sh@15 -- # uname 00:04:45.195 15:15:02 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:45.195 15:15:02 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:45.195 15:15:02 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:45.195 15:15:02 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:45.195 15:15:02 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:04:45.195 15:15:02 -- common/autotest_common.sh@10 -- # set +x 00:04:45.455 ************************************ 00:04:45.455 START TEST env_dpdk_post_init 00:04:45.455 ************************************ 00:04:45.455 15:15:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:45.455 EAL: Detected CPU lcores: 128 00:04:45.455 EAL: Detected NUMA nodes: 2 00:04:45.455 EAL: Detected shared linkage of DPDK 00:04:45.455 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:45.455 EAL: Selected IOVA mode 'VA' 00:04:45.455 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.455 EAL: VFIO support initialized 00:04:45.455 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:45.455 EAL: Using IOMMU type 1 (Type 1) 00:04:45.714 EAL: Ignore mapping IO port bar(1) 00:04:45.714 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:45.976 EAL: Ignore mapping IO port bar(1) 00:04:45.976 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:45.976 EAL: Ignore mapping IO port bar(1) 00:04:46.235 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:46.235 EAL: Ignore mapping IO port bar(1) 00:04:46.494 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:46.494 EAL: Ignore mapping IO port bar(1) 00:04:46.494 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:46.753 EAL: Ignore mapping IO port bar(1) 00:04:46.753 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:47.012 EAL: Ignore mapping IO port bar(1) 00:04:47.012 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:47.270 EAL: Ignore mapping IO port bar(1) 00:04:47.270 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 
00:04:47.529 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:47.529 EAL: Ignore mapping IO port bar(1) 00:04:47.789 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:47.789 EAL: Ignore mapping IO port bar(1) 00:04:48.048 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:48.048 EAL: Ignore mapping IO port bar(1) 00:04:48.308 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:48.308 EAL: Ignore mapping IO port bar(1) 00:04:48.308 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:48.567 EAL: Ignore mapping IO port bar(1) 00:04:48.567 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:48.826 EAL: Ignore mapping IO port bar(1) 00:04:48.826 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:49.086 EAL: Ignore mapping IO port bar(1) 00:04:49.086 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:49.086 EAL: Ignore mapping IO port bar(1) 00:04:49.346 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:49.346 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:49.346 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:49.346 Starting DPDK initialization... 00:04:49.346 Starting SPDK post initialization... 00:04:49.346 SPDK NVMe probe 00:04:49.346 Attaching to 0000:65:00.0 00:04:49.346 Attached to 0000:65:00.0 00:04:49.346 Cleaning up... 
00:04:51.251 00:04:51.251 real 0m5.715s 00:04:51.251 user 0m0.189s 00:04:51.251 sys 0m0.066s 00:04:51.252 15:15:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:51.252 15:15:08 -- common/autotest_common.sh@10 -- # set +x 00:04:51.252 ************************************ 00:04:51.252 END TEST env_dpdk_post_init 00:04:51.252 ************************************ 00:04:51.252 15:15:08 -- env/env.sh@26 -- # uname 00:04:51.252 15:15:08 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:51.252 15:15:08 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:51.252 15:15:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.252 15:15:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.252 15:15:08 -- common/autotest_common.sh@10 -- # set +x 00:04:51.252 ************************************ 00:04:51.252 START TEST env_mem_callbacks 00:04:51.252 ************************************ 00:04:51.252 15:15:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:51.252 EAL: Detected CPU lcores: 128 00:04:51.252 EAL: Detected NUMA nodes: 2 00:04:51.252 EAL: Detected shared linkage of DPDK 00:04:51.252 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:51.252 EAL: Selected IOVA mode 'VA' 00:04:51.252 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.252 EAL: VFIO support initialized 00:04:51.252 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:51.252 00:04:51.252 00:04:51.252 CUnit - A unit testing framework for C - Version 2.1-3 00:04:51.252 http://cunit.sourceforge.net/ 00:04:51.252 00:04:51.252 00:04:51.252 Suite: memory 00:04:51.252 Test: test ... 
00:04:51.252 register 0x200000200000 2097152 00:04:51.252 malloc 3145728 00:04:51.252 register 0x200000400000 4194304 00:04:51.252 buf 0x200000500000 len 3145728 PASSED 00:04:51.252 malloc 64 00:04:51.252 buf 0x2000004fff40 len 64 PASSED 00:04:51.252 malloc 4194304 00:04:51.252 register 0x200000800000 6291456 00:04:51.252 buf 0x200000a00000 len 4194304 PASSED 00:04:51.252 free 0x200000500000 3145728 00:04:51.252 free 0x2000004fff40 64 00:04:51.252 unregister 0x200000400000 4194304 PASSED 00:04:51.252 free 0x200000a00000 4194304 00:04:51.252 unregister 0x200000800000 6291456 PASSED 00:04:51.252 malloc 8388608 00:04:51.252 register 0x200000400000 10485760 00:04:51.252 buf 0x200000600000 len 8388608 PASSED 00:04:51.252 free 0x200000600000 8388608 00:04:51.252 unregister 0x200000400000 10485760 PASSED 00:04:51.252 passed 00:04:51.252 00:04:51.252 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.252 suites 1 1 n/a 0 0 00:04:51.252 tests 1 1 1 0 0 00:04:51.252 asserts 15 15 15 0 n/a 00:04:51.252 00:04:51.252 Elapsed time = 0.004 seconds 00:04:51.252 00:04:51.252 real 0m0.060s 00:04:51.252 user 0m0.017s 00:04:51.252 sys 0m0.043s 00:04:51.252 15:15:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:51.252 15:15:08 -- common/autotest_common.sh@10 -- # set +x 00:04:51.252 ************************************ 00:04:51.252 END TEST env_mem_callbacks 00:04:51.252 ************************************ 00:04:51.511 00:04:51.511 real 0m7.927s 00:04:51.511 user 0m1.237s 00:04:51.511 sys 0m1.124s 00:04:51.511 15:15:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:51.511 15:15:08 -- common/autotest_common.sh@10 -- # set +x 00:04:51.511 ************************************ 00:04:51.511 END TEST env 00:04:51.511 ************************************ 00:04:51.511 15:15:08 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:51.511 15:15:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 
']' 00:04:51.511 15:15:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.511 15:15:08 -- common/autotest_common.sh@10 -- # set +x 00:04:51.511 ************************************ 00:04:51.511 START TEST rpc 00:04:51.511 ************************************ 00:04:51.511 15:15:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:51.770 * Looking for test storage... 00:04:51.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:51.770 15:15:09 -- rpc/rpc.sh@65 -- # spdk_pid=1415605 00:04:51.770 15:15:09 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.770 15:15:09 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:51.770 15:15:09 -- rpc/rpc.sh@67 -- # waitforlisten 1415605 00:04:51.770 15:15:09 -- common/autotest_common.sh@817 -- # '[' -z 1415605 ']' 00:04:51.770 15:15:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.770 15:15:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:51.770 15:15:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.770 15:15:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:51.770 15:15:09 -- common/autotest_common.sh@10 -- # set +x 00:04:51.770 [2024-04-26 15:15:09.089450] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:04:51.770 [2024-04-26 15:15:09.089517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1415605 ] 00:04:51.770 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.770 [2024-04-26 15:15:09.156852] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.030 [2024-04-26 15:15:09.229906] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:52.030 [2024-04-26 15:15:09.229946] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1415605' to capture a snapshot of events at runtime. 00:04:52.030 [2024-04-26 15:15:09.229954] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:52.030 [2024-04-26 15:15:09.229961] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:52.030 [2024-04-26 15:15:09.229966] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1415605 for offline analysis/debug. 
00:04:52.030 [2024-04-26 15:15:09.229996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.599 15:15:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:52.599 15:15:09 -- common/autotest_common.sh@850 -- # return 0 00:04:52.599 15:15:09 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:52.599 15:15:09 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:52.599 15:15:09 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:52.599 15:15:09 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:52.599 15:15:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.599 15:15:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.599 15:15:09 -- common/autotest_common.sh@10 -- # set +x 00:04:52.599 ************************************ 00:04:52.599 START TEST rpc_integrity 00:04:52.599 ************************************ 00:04:52.599 15:15:10 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:52.599 15:15:10 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:52.599 15:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.599 15:15:10 -- common/autotest_common.sh@10 -- # set +x 00:04:52.599 15:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.599 15:15:10 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:52.599 15:15:10 -- rpc/rpc.sh@13 -- # jq length 00:04:52.858 15:15:10 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:04:52.858 15:15:10 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:52.858 15:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.858 15:15:10 -- common/autotest_common.sh@10 -- # set +x 00:04:52.858 15:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.858 15:15:10 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:52.858 15:15:10 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:52.858 15:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.858 15:15:10 -- common/autotest_common.sh@10 -- # set +x 00:04:52.858 15:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.858 15:15:10 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:52.858 { 00:04:52.858 "name": "Malloc0", 00:04:52.858 "aliases": [ 00:04:52.858 "50f6cc99-28f8-4ab9-84b8-273f22a0619d" 00:04:52.858 ], 00:04:52.858 "product_name": "Malloc disk", 00:04:52.858 "block_size": 512, 00:04:52.858 "num_blocks": 16384, 00:04:52.858 "uuid": "50f6cc99-28f8-4ab9-84b8-273f22a0619d", 00:04:52.858 "assigned_rate_limits": { 00:04:52.858 "rw_ios_per_sec": 0, 00:04:52.858 "rw_mbytes_per_sec": 0, 00:04:52.858 "r_mbytes_per_sec": 0, 00:04:52.858 "w_mbytes_per_sec": 0 00:04:52.858 }, 00:04:52.858 "claimed": false, 00:04:52.858 "zoned": false, 00:04:52.859 "supported_io_types": { 00:04:52.859 "read": true, 00:04:52.859 "write": true, 00:04:52.859 "unmap": true, 00:04:52.859 "write_zeroes": true, 00:04:52.859 "flush": true, 00:04:52.859 "reset": true, 00:04:52.859 "compare": false, 00:04:52.859 "compare_and_write": false, 00:04:52.859 "abort": true, 00:04:52.859 "nvme_admin": false, 00:04:52.859 "nvme_io": false 00:04:52.859 }, 00:04:52.859 "memory_domains": [ 00:04:52.859 { 00:04:52.859 "dma_device_id": "system", 00:04:52.859 "dma_device_type": 1 00:04:52.859 }, 00:04:52.859 { 00:04:52.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.859 "dma_device_type": 2 00:04:52.859 } 00:04:52.859 ], 00:04:52.859 "driver_specific": {} 00:04:52.859 } 00:04:52.859 ]' 00:04:52.859 
15:15:10 -- rpc/rpc.sh@17 -- # jq length 00:04:52.859 15:15:10 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:52.859 15:15:10 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:52.859 15:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.859 15:15:10 -- common/autotest_common.sh@10 -- # set +x 00:04:52.859 [2024-04-26 15:15:10.155976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:52.859 [2024-04-26 15:15:10.156010] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:52.859 [2024-04-26 15:15:10.156023] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a907e0 00:04:52.859 [2024-04-26 15:15:10.156030] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:52.859 [2024-04-26 15:15:10.157387] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:52.859 [2024-04-26 15:15:10.157409] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:52.859 Passthru0 00:04:52.859 15:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.859 15:15:10 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:52.859 15:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.859 15:15:10 -- common/autotest_common.sh@10 -- # set +x 00:04:52.859 15:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.859 15:15:10 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:52.859 { 00:04:52.859 "name": "Malloc0", 00:04:52.859 "aliases": [ 00:04:52.859 "50f6cc99-28f8-4ab9-84b8-273f22a0619d" 00:04:52.859 ], 00:04:52.859 "product_name": "Malloc disk", 00:04:52.859 "block_size": 512, 00:04:52.859 "num_blocks": 16384, 00:04:52.859 "uuid": "50f6cc99-28f8-4ab9-84b8-273f22a0619d", 00:04:52.859 "assigned_rate_limits": { 00:04:52.859 "rw_ios_per_sec": 0, 00:04:52.859 "rw_mbytes_per_sec": 0, 00:04:52.859 "r_mbytes_per_sec": 0, 00:04:52.859 "w_mbytes_per_sec": 0 00:04:52.859 
}, 00:04:52.859 "claimed": true, 00:04:52.859 "claim_type": "exclusive_write", 00:04:52.859 "zoned": false, 00:04:52.859 "supported_io_types": { 00:04:52.859 "read": true, 00:04:52.859 "write": true, 00:04:52.859 "unmap": true, 00:04:52.859 "write_zeroes": true, 00:04:52.859 "flush": true, 00:04:52.859 "reset": true, 00:04:52.859 "compare": false, 00:04:52.859 "compare_and_write": false, 00:04:52.859 "abort": true, 00:04:52.859 "nvme_admin": false, 00:04:52.859 "nvme_io": false 00:04:52.859 }, 00:04:52.859 "memory_domains": [ 00:04:52.859 { 00:04:52.859 "dma_device_id": "system", 00:04:52.859 "dma_device_type": 1 00:04:52.859 }, 00:04:52.859 { 00:04:52.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.859 "dma_device_type": 2 00:04:52.859 } 00:04:52.859 ], 00:04:52.859 "driver_specific": {} 00:04:52.859 }, 00:04:52.859 { 00:04:52.859 "name": "Passthru0", 00:04:52.859 "aliases": [ 00:04:52.859 "0ed2cd21-95f6-588b-85c0-a8d4ac06db2d" 00:04:52.859 ], 00:04:52.859 "product_name": "passthru", 00:04:52.859 "block_size": 512, 00:04:52.859 "num_blocks": 16384, 00:04:52.859 "uuid": "0ed2cd21-95f6-588b-85c0-a8d4ac06db2d", 00:04:52.859 "assigned_rate_limits": { 00:04:52.859 "rw_ios_per_sec": 0, 00:04:52.859 "rw_mbytes_per_sec": 0, 00:04:52.859 "r_mbytes_per_sec": 0, 00:04:52.859 "w_mbytes_per_sec": 0 00:04:52.859 }, 00:04:52.859 "claimed": false, 00:04:52.859 "zoned": false, 00:04:52.859 "supported_io_types": { 00:04:52.859 "read": true, 00:04:52.859 "write": true, 00:04:52.859 "unmap": true, 00:04:52.859 "write_zeroes": true, 00:04:52.859 "flush": true, 00:04:52.859 "reset": true, 00:04:52.859 "compare": false, 00:04:52.859 "compare_and_write": false, 00:04:52.859 "abort": true, 00:04:52.859 "nvme_admin": false, 00:04:52.859 "nvme_io": false 00:04:52.859 }, 00:04:52.859 "memory_domains": [ 00:04:52.859 { 00:04:52.859 "dma_device_id": "system", 00:04:52.859 "dma_device_type": 1 00:04:52.859 }, 00:04:52.859 { 00:04:52.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:04:52.859 "dma_device_type": 2 00:04:52.859 } 00:04:52.859 ], 00:04:52.859 "driver_specific": { 00:04:52.859 "passthru": { 00:04:52.859 "name": "Passthru0", 00:04:52.859 "base_bdev_name": "Malloc0" 00:04:52.859 } 00:04:52.859 } 00:04:52.859 } 00:04:52.859 ]' 00:04:52.859 15:15:10 -- rpc/rpc.sh@21 -- # jq length 00:04:52.859 15:15:10 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:52.859 15:15:10 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:52.859 15:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.859 15:15:10 -- common/autotest_common.sh@10 -- # set +x 00:04:52.859 15:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.859 15:15:10 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:52.859 15:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.859 15:15:10 -- common/autotest_common.sh@10 -- # set +x 00:04:52.859 15:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.859 15:15:10 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:52.859 15:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.859 15:15:10 -- common/autotest_common.sh@10 -- # set +x 00:04:52.859 15:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.859 15:15:10 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:52.859 15:15:10 -- rpc/rpc.sh@26 -- # jq length 00:04:53.118 15:15:10 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:53.118 00:04:53.118 real 0m0.293s 00:04:53.118 user 0m0.190s 00:04:53.118 sys 0m0.038s 00:04:53.118 15:15:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:53.118 15:15:10 -- common/autotest_common.sh@10 -- # set +x 00:04:53.118 ************************************ 00:04:53.118 END TEST rpc_integrity 00:04:53.118 ************************************ 00:04:53.118 15:15:10 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:53.118 15:15:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.118 15:15:10 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:04:53.118 15:15:10 -- common/autotest_common.sh@10 -- # set +x 00:04:53.118 ************************************ 00:04:53.118 START TEST rpc_plugins 00:04:53.118 ************************************ 00:04:53.118 15:15:10 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:04:53.118 15:15:10 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:53.118 15:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.118 15:15:10 -- common/autotest_common.sh@10 -- # set +x 00:04:53.118 15:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.118 15:15:10 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:53.118 15:15:10 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:53.118 15:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.118 15:15:10 -- common/autotest_common.sh@10 -- # set +x 00:04:53.118 15:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.118 15:15:10 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:53.118 { 00:04:53.118 "name": "Malloc1", 00:04:53.118 "aliases": [ 00:04:53.118 "45b86ed7-ca8f-4e49-9f62-982d8207a85c" 00:04:53.118 ], 00:04:53.118 "product_name": "Malloc disk", 00:04:53.118 "block_size": 4096, 00:04:53.118 "num_blocks": 256, 00:04:53.118 "uuid": "45b86ed7-ca8f-4e49-9f62-982d8207a85c", 00:04:53.118 "assigned_rate_limits": { 00:04:53.118 "rw_ios_per_sec": 0, 00:04:53.118 "rw_mbytes_per_sec": 0, 00:04:53.119 "r_mbytes_per_sec": 0, 00:04:53.119 "w_mbytes_per_sec": 0 00:04:53.119 }, 00:04:53.119 "claimed": false, 00:04:53.119 "zoned": false, 00:04:53.119 "supported_io_types": { 00:04:53.119 "read": true, 00:04:53.119 "write": true, 00:04:53.119 "unmap": true, 00:04:53.119 "write_zeroes": true, 00:04:53.119 "flush": true, 00:04:53.119 "reset": true, 00:04:53.119 "compare": false, 00:04:53.119 "compare_and_write": false, 00:04:53.119 "abort": true, 00:04:53.119 "nvme_admin": false, 00:04:53.119 "nvme_io": false 00:04:53.119 }, 00:04:53.119 "memory_domains": [ 00:04:53.119 { 
00:04:53.119 "dma_device_id": "system", 00:04:53.119 "dma_device_type": 1 00:04:53.119 }, 00:04:53.119 { 00:04:53.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.119 "dma_device_type": 2 00:04:53.119 } 00:04:53.119 ], 00:04:53.119 "driver_specific": {} 00:04:53.119 } 00:04:53.119 ]' 00:04:53.119 15:15:10 -- rpc/rpc.sh@32 -- # jq length 00:04:53.379 15:15:10 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:53.379 15:15:10 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:53.379 15:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.379 15:15:10 -- common/autotest_common.sh@10 -- # set +x 00:04:53.379 15:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.379 15:15:10 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:53.379 15:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.379 15:15:10 -- common/autotest_common.sh@10 -- # set +x 00:04:53.379 15:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.379 15:15:10 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:53.379 15:15:10 -- rpc/rpc.sh@36 -- # jq length 00:04:53.379 15:15:10 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:53.379 00:04:53.379 real 0m0.153s 00:04:53.379 user 0m0.097s 00:04:53.379 sys 0m0.018s 00:04:53.379 15:15:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:53.379 15:15:10 -- common/autotest_common.sh@10 -- # set +x 00:04:53.379 ************************************ 00:04:53.379 END TEST rpc_plugins 00:04:53.379 ************************************ 00:04:53.379 15:15:10 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:53.379 15:15:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.379 15:15:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.379 15:15:10 -- common/autotest_common.sh@10 -- # set +x 00:04:53.641 ************************************ 00:04:53.641 START TEST rpc_trace_cmd_test 00:04:53.641 ************************************ 00:04:53.641 
15:15:10 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:04:53.641 15:15:10 -- rpc/rpc.sh@40 -- # local info 00:04:53.641 15:15:10 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:53.641 15:15:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.641 15:15:10 -- common/autotest_common.sh@10 -- # set +x 00:04:53.641 15:15:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.641 15:15:10 -- rpc/rpc.sh@42 -- # info='{ 00:04:53.641 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1415605", 00:04:53.641 "tpoint_group_mask": "0x8", 00:04:53.641 "iscsi_conn": { 00:04:53.641 "mask": "0x2", 00:04:53.641 "tpoint_mask": "0x0" 00:04:53.641 }, 00:04:53.641 "scsi": { 00:04:53.641 "mask": "0x4", 00:04:53.641 "tpoint_mask": "0x0" 00:04:53.641 }, 00:04:53.641 "bdev": { 00:04:53.641 "mask": "0x8", 00:04:53.641 "tpoint_mask": "0xffffffffffffffff" 00:04:53.641 }, 00:04:53.641 "nvmf_rdma": { 00:04:53.641 "mask": "0x10", 00:04:53.641 "tpoint_mask": "0x0" 00:04:53.641 }, 00:04:53.641 "nvmf_tcp": { 00:04:53.641 "mask": "0x20", 00:04:53.641 "tpoint_mask": "0x0" 00:04:53.641 }, 00:04:53.641 "ftl": { 00:04:53.641 "mask": "0x40", 00:04:53.641 "tpoint_mask": "0x0" 00:04:53.641 }, 00:04:53.641 "blobfs": { 00:04:53.641 "mask": "0x80", 00:04:53.641 "tpoint_mask": "0x0" 00:04:53.641 }, 00:04:53.641 "dsa": { 00:04:53.641 "mask": "0x200", 00:04:53.641 "tpoint_mask": "0x0" 00:04:53.641 }, 00:04:53.641 "thread": { 00:04:53.641 "mask": "0x400", 00:04:53.641 "tpoint_mask": "0x0" 00:04:53.641 }, 00:04:53.641 "nvme_pcie": { 00:04:53.641 "mask": "0x800", 00:04:53.641 "tpoint_mask": "0x0" 00:04:53.641 }, 00:04:53.641 "iaa": { 00:04:53.641 "mask": "0x1000", 00:04:53.641 "tpoint_mask": "0x0" 00:04:53.641 }, 00:04:53.641 "nvme_tcp": { 00:04:53.641 "mask": "0x2000", 00:04:53.641 "tpoint_mask": "0x0" 00:04:53.641 }, 00:04:53.641 "bdev_nvme": { 00:04:53.641 "mask": "0x4000", 00:04:53.641 "tpoint_mask": "0x0" 00:04:53.641 }, 00:04:53.641 "sock": { 00:04:53.641 "mask": "0x8000", 
00:04:53.641 "tpoint_mask": "0x0" 00:04:53.641 } 00:04:53.641 }' 00:04:53.641 15:15:10 -- rpc/rpc.sh@43 -- # jq length 00:04:53.641 15:15:10 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:53.641 15:15:10 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:53.641 15:15:10 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:53.641 15:15:10 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:53.641 15:15:10 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:53.641 15:15:10 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:53.641 15:15:11 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:53.641 15:15:11 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:53.641 15:15:11 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:53.641 00:04:53.641 real 0m0.250s 00:04:53.641 user 0m0.213s 00:04:53.641 sys 0m0.027s 00:04:53.641 15:15:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:53.641 15:15:11 -- common/autotest_common.sh@10 -- # set +x 00:04:53.641 ************************************ 00:04:53.641 END TEST rpc_trace_cmd_test 00:04:53.641 ************************************ 00:04:53.901 15:15:11 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:53.901 15:15:11 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:53.901 15:15:11 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:53.901 15:15:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.901 15:15:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.901 15:15:11 -- common/autotest_common.sh@10 -- # set +x 00:04:53.901 ************************************ 00:04:53.901 START TEST rpc_daemon_integrity 00:04:53.901 ************************************ 00:04:53.901 15:15:11 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:53.901 15:15:11 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:53.901 15:15:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.901 15:15:11 -- common/autotest_common.sh@10 -- # set +x 00:04:53.901 15:15:11 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.901 15:15:11 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:53.901 15:15:11 -- rpc/rpc.sh@13 -- # jq length 00:04:53.901 15:15:11 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:53.901 15:15:11 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:53.901 15:15:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.901 15:15:11 -- common/autotest_common.sh@10 -- # set +x 00:04:53.901 15:15:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.901 15:15:11 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:53.901 15:15:11 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:53.901 15:15:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.901 15:15:11 -- common/autotest_common.sh@10 -- # set +x 00:04:54.161 15:15:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.161 15:15:11 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:54.161 { 00:04:54.161 "name": "Malloc2", 00:04:54.161 "aliases": [ 00:04:54.161 "37c9a62f-65ab-48f3-8d21-50579bd53594" 00:04:54.161 ], 00:04:54.161 "product_name": "Malloc disk", 00:04:54.161 "block_size": 512, 00:04:54.161 "num_blocks": 16384, 00:04:54.161 "uuid": "37c9a62f-65ab-48f3-8d21-50579bd53594", 00:04:54.161 "assigned_rate_limits": { 00:04:54.161 "rw_ios_per_sec": 0, 00:04:54.161 "rw_mbytes_per_sec": 0, 00:04:54.161 "r_mbytes_per_sec": 0, 00:04:54.161 "w_mbytes_per_sec": 0 00:04:54.161 }, 00:04:54.161 "claimed": false, 00:04:54.161 "zoned": false, 00:04:54.161 "supported_io_types": { 00:04:54.161 "read": true, 00:04:54.161 "write": true, 00:04:54.161 "unmap": true, 00:04:54.161 "write_zeroes": true, 00:04:54.161 "flush": true, 00:04:54.161 "reset": true, 00:04:54.161 "compare": false, 00:04:54.161 "compare_and_write": false, 00:04:54.161 "abort": true, 00:04:54.161 "nvme_admin": false, 00:04:54.161 "nvme_io": false 00:04:54.161 }, 00:04:54.161 "memory_domains": [ 00:04:54.161 { 00:04:54.161 "dma_device_id": "system", 00:04:54.161 "dma_device_type": 1 00:04:54.161 }, 
00:04:54.161 { 00:04:54.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.161 "dma_device_type": 2 00:04:54.161 } 00:04:54.161 ], 00:04:54.161 "driver_specific": {} 00:04:54.161 } 00:04:54.161 ]' 00:04:54.161 15:15:11 -- rpc/rpc.sh@17 -- # jq length 00:04:54.161 15:15:11 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:54.161 15:15:11 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:54.161 15:15:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.161 15:15:11 -- common/autotest_common.sh@10 -- # set +x 00:04:54.161 [2024-04-26 15:15:11.415382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:54.161 [2024-04-26 15:15:11.415413] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:54.161 [2024-04-26 15:15:11.415429] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a90b30 00:04:54.161 [2024-04-26 15:15:11.415436] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:54.161 [2024-04-26 15:15:11.416663] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:54.161 [2024-04-26 15:15:11.416686] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:54.161 Passthru0 00:04:54.161 15:15:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.161 15:15:11 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:54.161 15:15:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.161 15:15:11 -- common/autotest_common.sh@10 -- # set +x 00:04:54.161 15:15:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.161 15:15:11 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:54.161 { 00:04:54.161 "name": "Malloc2", 00:04:54.161 "aliases": [ 00:04:54.161 "37c9a62f-65ab-48f3-8d21-50579bd53594" 00:04:54.161 ], 00:04:54.161 "product_name": "Malloc disk", 00:04:54.161 "block_size": 512, 00:04:54.161 "num_blocks": 16384, 00:04:54.161 "uuid": 
"37c9a62f-65ab-48f3-8d21-50579bd53594", 00:04:54.161 "assigned_rate_limits": { 00:04:54.161 "rw_ios_per_sec": 0, 00:04:54.161 "rw_mbytes_per_sec": 0, 00:04:54.161 "r_mbytes_per_sec": 0, 00:04:54.161 "w_mbytes_per_sec": 0 00:04:54.161 }, 00:04:54.161 "claimed": true, 00:04:54.161 "claim_type": "exclusive_write", 00:04:54.161 "zoned": false, 00:04:54.161 "supported_io_types": { 00:04:54.161 "read": true, 00:04:54.161 "write": true, 00:04:54.161 "unmap": true, 00:04:54.161 "write_zeroes": true, 00:04:54.161 "flush": true, 00:04:54.161 "reset": true, 00:04:54.161 "compare": false, 00:04:54.161 "compare_and_write": false, 00:04:54.161 "abort": true, 00:04:54.161 "nvme_admin": false, 00:04:54.161 "nvme_io": false 00:04:54.161 }, 00:04:54.161 "memory_domains": [ 00:04:54.161 { 00:04:54.161 "dma_device_id": "system", 00:04:54.161 "dma_device_type": 1 00:04:54.161 }, 00:04:54.161 { 00:04:54.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.161 "dma_device_type": 2 00:04:54.161 } 00:04:54.161 ], 00:04:54.161 "driver_specific": {} 00:04:54.161 }, 00:04:54.161 { 00:04:54.161 "name": "Passthru0", 00:04:54.161 "aliases": [ 00:04:54.161 "af227bf8-e492-5f4b-a824-77311db82b4c" 00:04:54.161 ], 00:04:54.161 "product_name": "passthru", 00:04:54.161 "block_size": 512, 00:04:54.161 "num_blocks": 16384, 00:04:54.161 "uuid": "af227bf8-e492-5f4b-a824-77311db82b4c", 00:04:54.161 "assigned_rate_limits": { 00:04:54.161 "rw_ios_per_sec": 0, 00:04:54.161 "rw_mbytes_per_sec": 0, 00:04:54.161 "r_mbytes_per_sec": 0, 00:04:54.161 "w_mbytes_per_sec": 0 00:04:54.161 }, 00:04:54.161 "claimed": false, 00:04:54.161 "zoned": false, 00:04:54.161 "supported_io_types": { 00:04:54.161 "read": true, 00:04:54.161 "write": true, 00:04:54.161 "unmap": true, 00:04:54.161 "write_zeroes": true, 00:04:54.161 "flush": true, 00:04:54.161 "reset": true, 00:04:54.161 "compare": false, 00:04:54.161 "compare_and_write": false, 00:04:54.161 "abort": true, 00:04:54.161 "nvme_admin": false, 00:04:54.161 "nvme_io": false 
00:04:54.161 }, 00:04:54.161 "memory_domains": [ 00:04:54.161 { 00:04:54.161 "dma_device_id": "system", 00:04:54.161 "dma_device_type": 1 00:04:54.161 }, 00:04:54.161 { 00:04:54.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.161 "dma_device_type": 2 00:04:54.161 } 00:04:54.161 ], 00:04:54.161 "driver_specific": { 00:04:54.161 "passthru": { 00:04:54.161 "name": "Passthru0", 00:04:54.161 "base_bdev_name": "Malloc2" 00:04:54.161 } 00:04:54.161 } 00:04:54.161 } 00:04:54.161 ]' 00:04:54.161 15:15:11 -- rpc/rpc.sh@21 -- # jq length 00:04:54.161 15:15:11 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:54.161 15:15:11 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:54.161 15:15:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.161 15:15:11 -- common/autotest_common.sh@10 -- # set +x 00:04:54.161 15:15:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.161 15:15:11 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:54.161 15:15:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.161 15:15:11 -- common/autotest_common.sh@10 -- # set +x 00:04:54.161 15:15:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.161 15:15:11 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:54.161 15:15:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.161 15:15:11 -- common/autotest_common.sh@10 -- # set +x 00:04:54.161 15:15:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.161 15:15:11 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:54.161 15:15:11 -- rpc/rpc.sh@26 -- # jq length 00:04:54.161 15:15:11 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:54.161 00:04:54.161 real 0m0.286s 00:04:54.161 user 0m0.187s 00:04:54.161 sys 0m0.036s 00:04:54.161 15:15:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:54.161 15:15:11 -- common/autotest_common.sh@10 -- # set +x 00:04:54.161 ************************************ 00:04:54.161 END TEST rpc_daemon_integrity 00:04:54.161 
************************************ 00:04:54.161 15:15:11 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:54.161 15:15:11 -- rpc/rpc.sh@84 -- # killprocess 1415605 00:04:54.161 15:15:11 -- common/autotest_common.sh@936 -- # '[' -z 1415605 ']' 00:04:54.161 15:15:11 -- common/autotest_common.sh@940 -- # kill -0 1415605 00:04:54.161 15:15:11 -- common/autotest_common.sh@941 -- # uname 00:04:54.162 15:15:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:54.162 15:15:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1415605 00:04:54.421 15:15:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:54.421 15:15:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:54.421 15:15:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1415605' 00:04:54.421 killing process with pid 1415605 00:04:54.421 15:15:11 -- common/autotest_common.sh@955 -- # kill 1415605 00:04:54.421 15:15:11 -- common/autotest_common.sh@960 -- # wait 1415605 00:04:54.421 00:04:54.421 real 0m2.929s 00:04:54.421 user 0m3.930s 00:04:54.421 sys 0m0.840s 00:04:54.421 15:15:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:54.421 15:15:11 -- common/autotest_common.sh@10 -- # set +x 00:04:54.421 ************************************ 00:04:54.421 END TEST rpc 00:04:54.421 ************************************ 00:04:54.682 15:15:11 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:54.682 15:15:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.682 15:15:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.682 15:15:11 -- common/autotest_common.sh@10 -- # set +x 00:04:54.682 ************************************ 00:04:54.682 START TEST skip_rpc 00:04:54.682 ************************************ 00:04:54.682 15:15:12 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:54.682 * Looking for test storage... 00:04:54.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:54.942 15:15:12 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:54.942 15:15:12 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:54.942 15:15:12 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:54.942 15:15:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.942 15:15:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.942 15:15:12 -- common/autotest_common.sh@10 -- # set +x 00:04:54.942 ************************************ 00:04:54.942 START TEST skip_rpc 00:04:54.942 ************************************ 00:04:54.942 15:15:12 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:04:54.942 15:15:12 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1416598 00:04:54.942 15:15:12 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.942 15:15:12 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:54.942 15:15:12 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:54.942 [2024-04-26 15:15:12.340595] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:04:54.942 [2024-04-26 15:15:12.340639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1416598 ] 00:04:54.942 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.203 [2024-04-26 15:15:12.400208] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.203 [2024-04-26 15:15:12.461994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.487 15:15:17 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:00.487 15:15:17 -- common/autotest_common.sh@638 -- # local es=0 00:05:00.487 15:15:17 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:00.487 15:15:17 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:00.487 15:15:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:00.487 15:15:17 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:00.487 15:15:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:00.487 15:15:17 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:05:00.487 15:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:00.487 15:15:17 -- common/autotest_common.sh@10 -- # set +x 00:05:00.487 15:15:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:00.487 15:15:17 -- common/autotest_common.sh@641 -- # es=1 00:05:00.487 15:15:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:00.487 15:15:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:00.487 15:15:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:00.487 15:15:17 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:00.487 15:15:17 -- rpc/skip_rpc.sh@23 -- # killprocess 1416598 00:05:00.487 15:15:17 -- common/autotest_common.sh@936 -- # '[' -z 1416598 ']' 00:05:00.487 15:15:17 -- common/autotest_common.sh@940 -- # 
kill -0 1416598 00:05:00.487 15:15:17 -- common/autotest_common.sh@941 -- # uname 00:05:00.487 15:15:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:00.487 15:15:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1416598 00:05:00.487 15:15:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:00.487 15:15:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:00.487 15:15:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1416598' 00:05:00.487 killing process with pid 1416598 00:05:00.487 15:15:17 -- common/autotest_common.sh@955 -- # kill 1416598 00:05:00.487 15:15:17 -- common/autotest_common.sh@960 -- # wait 1416598 00:05:00.487 00:05:00.487 real 0m5.277s 00:05:00.487 user 0m5.091s 00:05:00.487 sys 0m0.225s 00:05:00.487 15:15:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:00.487 15:15:17 -- common/autotest_common.sh@10 -- # set +x 00:05:00.487 ************************************ 00:05:00.487 END TEST skip_rpc 00:05:00.487 ************************************ 00:05:00.487 15:15:17 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:00.487 15:15:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:00.487 15:15:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.487 15:15:17 -- common/autotest_common.sh@10 -- # set +x 00:05:00.487 ************************************ 00:05:00.487 START TEST skip_rpc_with_json 00:05:00.487 ************************************ 00:05:00.487 15:15:17 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:05:00.487 15:15:17 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:00.487 15:15:17 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1417718 00:05:00.487 15:15:17 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.487 15:15:17 -- rpc/skip_rpc.sh@31 -- # waitforlisten 1417718 00:05:00.487 15:15:17 -- common/autotest_common.sh@817 
-- # '[' -z 1417718 ']' 00:05:00.487 15:15:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.487 15:15:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:00.487 15:15:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.487 15:15:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:00.487 15:15:17 -- common/autotest_common.sh@10 -- # set +x 00:05:00.487 15:15:17 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.487 [2024-04-26 15:15:17.816816] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:05:00.487 [2024-04-26 15:15:17.816880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1417718 ] 00:05:00.487 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.487 [2024-04-26 15:15:17.881183] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.747 [2024-04-26 15:15:17.954340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.317 15:15:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:01.317 15:15:18 -- common/autotest_common.sh@850 -- # return 0 00:05:01.317 15:15:18 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:01.317 15:15:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:01.317 15:15:18 -- common/autotest_common.sh@10 -- # set +x 00:05:01.317 [2024-04-26 15:15:18.576894] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:01.317 request: 00:05:01.317 { 00:05:01.317 "trtype": "tcp", 00:05:01.317 "method": "nvmf_get_transports", 
00:05:01.317 "req_id": 1 00:05:01.317 } 00:05:01.317 Got JSON-RPC error response 00:05:01.317 response: 00:05:01.317 { 00:05:01.317 "code": -19, 00:05:01.317 "message": "No such device" 00:05:01.317 } 00:05:01.317 15:15:18 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:01.317 15:15:18 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:01.317 15:15:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:01.317 15:15:18 -- common/autotest_common.sh@10 -- # set +x 00:05:01.317 [2024-04-26 15:15:18.584995] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:01.317 15:15:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:01.317 15:15:18 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:01.317 15:15:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:01.317 15:15:18 -- common/autotest_common.sh@10 -- # set +x 00:05:01.317 15:15:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:01.317 15:15:18 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:01.317 { 00:05:01.317 "subsystems": [ 00:05:01.317 { 00:05:01.317 "subsystem": "vfio_user_target", 00:05:01.317 "config": null 00:05:01.317 }, 00:05:01.317 { 00:05:01.317 "subsystem": "keyring", 00:05:01.317 "config": [] 00:05:01.317 }, 00:05:01.317 { 00:05:01.317 "subsystem": "iobuf", 00:05:01.317 "config": [ 00:05:01.317 { 00:05:01.317 "method": "iobuf_set_options", 00:05:01.317 "params": { 00:05:01.317 "small_pool_count": 8192, 00:05:01.317 "large_pool_count": 1024, 00:05:01.317 "small_bufsize": 8192, 00:05:01.317 "large_bufsize": 135168 00:05:01.317 } 00:05:01.317 } 00:05:01.317 ] 00:05:01.317 }, 00:05:01.317 { 00:05:01.317 "subsystem": "sock", 00:05:01.317 "config": [ 00:05:01.317 { 00:05:01.317 "method": "sock_impl_set_options", 00:05:01.317 "params": { 00:05:01.317 "impl_name": "posix", 00:05:01.317 "recv_buf_size": 2097152, 00:05:01.317 "send_buf_size": 2097152, 00:05:01.317 
"enable_recv_pipe": true, 00:05:01.317 "enable_quickack": false, 00:05:01.317 "enable_placement_id": 0, 00:05:01.318 "enable_zerocopy_send_server": true, 00:05:01.318 "enable_zerocopy_send_client": false, 00:05:01.318 "zerocopy_threshold": 0, 00:05:01.318 "tls_version": 0, 00:05:01.318 "enable_ktls": false 00:05:01.318 } 00:05:01.318 }, 00:05:01.318 { 00:05:01.318 "method": "sock_impl_set_options", 00:05:01.318 "params": { 00:05:01.318 "impl_name": "ssl", 00:05:01.318 "recv_buf_size": 4096, 00:05:01.318 "send_buf_size": 4096, 00:05:01.318 "enable_recv_pipe": true, 00:05:01.318 "enable_quickack": false, 00:05:01.318 "enable_placement_id": 0, 00:05:01.318 "enable_zerocopy_send_server": true, 00:05:01.318 "enable_zerocopy_send_client": false, 00:05:01.318 "zerocopy_threshold": 0, 00:05:01.318 "tls_version": 0, 00:05:01.318 "enable_ktls": false 00:05:01.318 } 00:05:01.318 } 00:05:01.318 ] 00:05:01.318 }, 00:05:01.318 { 00:05:01.318 "subsystem": "vmd", 00:05:01.318 "config": [] 00:05:01.318 }, 00:05:01.318 { 00:05:01.318 "subsystem": "accel", 00:05:01.318 "config": [ 00:05:01.318 { 00:05:01.318 "method": "accel_set_options", 00:05:01.318 "params": { 00:05:01.318 "small_cache_size": 128, 00:05:01.318 "large_cache_size": 16, 00:05:01.318 "task_count": 2048, 00:05:01.318 "sequence_count": 2048, 00:05:01.318 "buf_count": 2048 00:05:01.318 } 00:05:01.318 } 00:05:01.318 ] 00:05:01.318 }, 00:05:01.318 { 00:05:01.318 "subsystem": "bdev", 00:05:01.318 "config": [ 00:05:01.318 { 00:05:01.318 "method": "bdev_set_options", 00:05:01.318 "params": { 00:05:01.318 "bdev_io_pool_size": 65535, 00:05:01.318 "bdev_io_cache_size": 256, 00:05:01.318 "bdev_auto_examine": true, 00:05:01.318 "iobuf_small_cache_size": 128, 00:05:01.318 "iobuf_large_cache_size": 16 00:05:01.318 } 00:05:01.318 }, 00:05:01.318 { 00:05:01.318 "method": "bdev_raid_set_options", 00:05:01.318 "params": { 00:05:01.318 "process_window_size_kb": 1024 00:05:01.318 } 00:05:01.318 }, 00:05:01.318 { 00:05:01.318 "method": 
"bdev_iscsi_set_options", 00:05:01.318 "params": { 00:05:01.318 "timeout_sec": 30 00:05:01.318 } 00:05:01.318 }, 00:05:01.318 { 00:05:01.318 "method": "bdev_nvme_set_options", 00:05:01.318 "params": { 00:05:01.318 "action_on_timeout": "none", 00:05:01.318 "timeout_us": 0, 00:05:01.318 "timeout_admin_us": 0, 00:05:01.318 "keep_alive_timeout_ms": 10000, 00:05:01.318 "arbitration_burst": 0, 00:05:01.318 "low_priority_weight": 0, 00:05:01.318 "medium_priority_weight": 0, 00:05:01.318 "high_priority_weight": 0, 00:05:01.318 "nvme_adminq_poll_period_us": 10000, 00:05:01.318 "nvme_ioq_poll_period_us": 0, 00:05:01.318 "io_queue_requests": 0, 00:05:01.318 "delay_cmd_submit": true, 00:05:01.318 "transport_retry_count": 4, 00:05:01.318 "bdev_retry_count": 3, 00:05:01.318 "transport_ack_timeout": 0, 00:05:01.318 "ctrlr_loss_timeout_sec": 0, 00:05:01.318 "reconnect_delay_sec": 0, 00:05:01.318 "fast_io_fail_timeout_sec": 0, 00:05:01.318 "disable_auto_failback": false, 00:05:01.318 "generate_uuids": false, 00:05:01.318 "transport_tos": 0, 00:05:01.318 "nvme_error_stat": false, 00:05:01.318 "rdma_srq_size": 0, 00:05:01.318 "io_path_stat": false, 00:05:01.318 "allow_accel_sequence": false, 00:05:01.318 "rdma_max_cq_size": 0, 00:05:01.318 "rdma_cm_event_timeout_ms": 0, 00:05:01.318 "dhchap_digests": [ 00:05:01.318 "sha256", 00:05:01.318 "sha384", 00:05:01.318 "sha512" 00:05:01.318 ], 00:05:01.318 "dhchap_dhgroups": [ 00:05:01.318 "null", 00:05:01.318 "ffdhe2048", 00:05:01.318 "ffdhe3072", 00:05:01.318 "ffdhe4096", 00:05:01.318 "ffdhe6144", 00:05:01.318 "ffdhe8192" 00:05:01.318 ] 00:05:01.318 } 00:05:01.318 }, 00:05:01.318 { 00:05:01.318 "method": "bdev_nvme_set_hotplug", 00:05:01.318 "params": { 00:05:01.318 "period_us": 100000, 00:05:01.318 "enable": false 00:05:01.318 } 00:05:01.318 }, 00:05:01.318 { 00:05:01.318 "method": "bdev_wait_for_examine" 00:05:01.318 } 00:05:01.318 ] 00:05:01.318 }, 00:05:01.318 { 00:05:01.318 "subsystem": "scsi", 00:05:01.318 "config": null 00:05:01.318 
}, 00:05:01.318 { 00:05:01.318 "subsystem": "scheduler", 00:05:01.318 "config": [ 00:05:01.318 { 00:05:01.318 "method": "framework_set_scheduler", 00:05:01.318 "params": { 00:05:01.318 "name": "static" 00:05:01.318 } 00:05:01.318 } 00:05:01.318 ] 00:05:01.318 }, 00:05:01.318 { 00:05:01.318 "subsystem": "vhost_scsi", 00:05:01.318 "config": [] 00:05:01.318 }, 00:05:01.318 { 00:05:01.318 "subsystem": "vhost_blk", 00:05:01.318 "config": [] 00:05:01.318 }, 00:05:01.318 { 00:05:01.318 "subsystem": "ublk", 00:05:01.318 "config": [] 00:05:01.318 }, 00:05:01.318 { 00:05:01.318 "subsystem": "nbd", 00:05:01.318 "config": [] 00:05:01.318 }, 00:05:01.318 { 00:05:01.318 "subsystem": "nvmf", 00:05:01.318 "config": [ 00:05:01.318 { 00:05:01.318 "method": "nvmf_set_config", 00:05:01.318 "params": { 00:05:01.318 "discovery_filter": "match_any", 00:05:01.318 "admin_cmd_passthru": { 00:05:01.318 "identify_ctrlr": false 00:05:01.318 } 00:05:01.318 } 00:05:01.318 }, 00:05:01.318 { 00:05:01.318 "method": "nvmf_set_max_subsystems", 00:05:01.318 "params": { 00:05:01.318 "max_subsystems": 1024 00:05:01.318 } 00:05:01.318 }, 00:05:01.318 { 00:05:01.318 "method": "nvmf_set_crdt", 00:05:01.318 "params": { 00:05:01.318 "crdt1": 0, 00:05:01.318 "crdt2": 0, 00:05:01.318 "crdt3": 0 00:05:01.318 } 00:05:01.318 }, 00:05:01.318 { 00:05:01.318 "method": "nvmf_create_transport", 00:05:01.318 "params": { 00:05:01.318 "trtype": "TCP", 00:05:01.318 "max_queue_depth": 128, 00:05:01.318 "max_io_qpairs_per_ctrlr": 127, 00:05:01.318 "in_capsule_data_size": 4096, 00:05:01.318 "max_io_size": 131072, 00:05:01.318 "io_unit_size": 131072, 00:05:01.318 "max_aq_depth": 128, 00:05:01.318 "num_shared_buffers": 511, 00:05:01.318 "buf_cache_size": 4294967295, 00:05:01.318 "dif_insert_or_strip": false, 00:05:01.318 "zcopy": false, 00:05:01.318 "c2h_success": true, 00:05:01.318 "sock_priority": 0, 00:05:01.318 "abort_timeout_sec": 1, 00:05:01.318 "ack_timeout": 0, 00:05:01.318 "data_wr_pool_size": 0 00:05:01.318 } 
00:05:01.318 } 00:05:01.318 ] 00:05:01.318 }, 00:05:01.318 { 00:05:01.318 "subsystem": "iscsi", 00:05:01.318 "config": [ 00:05:01.318 { 00:05:01.318 "method": "iscsi_set_options", 00:05:01.318 "params": { 00:05:01.318 "node_base": "iqn.2016-06.io.spdk", 00:05:01.318 "max_sessions": 128, 00:05:01.318 "max_connections_per_session": 2, 00:05:01.318 "max_queue_depth": 64, 00:05:01.318 "default_time2wait": 2, 00:05:01.318 "default_time2retain": 20, 00:05:01.318 "first_burst_length": 8192, 00:05:01.318 "immediate_data": true, 00:05:01.318 "allow_duplicated_isid": false, 00:05:01.318 "error_recovery_level": 0, 00:05:01.318 "nop_timeout": 60, 00:05:01.318 "nop_in_interval": 30, 00:05:01.318 "disable_chap": false, 00:05:01.318 "require_chap": false, 00:05:01.318 "mutual_chap": false, 00:05:01.318 "chap_group": 0, 00:05:01.318 "max_large_datain_per_connection": 64, 00:05:01.318 "max_r2t_per_connection": 4, 00:05:01.318 "pdu_pool_size": 36864, 00:05:01.318 "immediate_data_pool_size": 16384, 00:05:01.318 "data_out_pool_size": 2048 00:05:01.318 } 00:05:01.318 } 00:05:01.318 ] 00:05:01.318 } 00:05:01.318 ] 00:05:01.318 } 00:05:01.318 15:15:18 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:01.318 15:15:18 -- rpc/skip_rpc.sh@40 -- # killprocess 1417718 00:05:01.318 15:15:18 -- common/autotest_common.sh@936 -- # '[' -z 1417718 ']' 00:05:01.318 15:15:18 -- common/autotest_common.sh@940 -- # kill -0 1417718 00:05:01.318 15:15:18 -- common/autotest_common.sh@941 -- # uname 00:05:01.318 15:15:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:01.318 15:15:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1417718 00:05:01.578 15:15:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:01.578 15:15:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:01.578 15:15:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1417718' 00:05:01.578 killing process with pid 1417718 00:05:01.578 
15:15:18 -- common/autotest_common.sh@955 -- # kill 1417718 00:05:01.578 15:15:18 -- common/autotest_common.sh@960 -- # wait 1417718 00:05:01.578 15:15:18 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1417983 00:05:01.578 15:15:18 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:01.578 15:15:18 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:06.866 15:15:23 -- rpc/skip_rpc.sh@50 -- # killprocess 1417983 00:05:06.866 15:15:23 -- common/autotest_common.sh@936 -- # '[' -z 1417983 ']' 00:05:06.866 15:15:23 -- common/autotest_common.sh@940 -- # kill -0 1417983 00:05:06.866 15:15:23 -- common/autotest_common.sh@941 -- # uname 00:05:06.866 15:15:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:06.866 15:15:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1417983 00:05:06.866 15:15:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:06.866 15:15:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:06.866 15:15:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1417983' 00:05:06.866 killing process with pid 1417983 00:05:06.866 15:15:24 -- common/autotest_common.sh@955 -- # kill 1417983 00:05:06.866 15:15:24 -- common/autotest_common.sh@960 -- # wait 1417983 00:05:06.866 15:15:24 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:06.866 15:15:24 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:06.866 00:05:06.866 real 0m6.503s 00:05:06.866 user 0m6.366s 00:05:06.866 sys 0m0.504s 00:05:06.866 15:15:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:06.866 15:15:24 -- common/autotest_common.sh@10 -- # set +x 00:05:06.866 ************************************ 00:05:06.866 END TEST skip_rpc_with_json 
00:05:06.866 ************************************ 00:05:06.866 15:15:24 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:06.866 15:15:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.866 15:15:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.866 15:15:24 -- common/autotest_common.sh@10 -- # set +x 00:05:07.126 ************************************ 00:05:07.126 START TEST skip_rpc_with_delay 00:05:07.126 ************************************ 00:05:07.126 15:15:24 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:05:07.126 15:15:24 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:07.126 15:15:24 -- common/autotest_common.sh@638 -- # local es=0 00:05:07.126 15:15:24 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:07.126 15:15:24 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.126 15:15:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:07.126 15:15:24 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.126 15:15:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:07.126 15:15:24 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.126 15:15:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:07.126 15:15:24 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.126 15:15:24 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:07.126 15:15:24 -- 
common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:07.126 [2024-04-26 15:15:24.517389] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:07.126 [2024-04-26 15:15:24.517483] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:07.126 15:15:24 -- common/autotest_common.sh@641 -- # es=1 00:05:07.126 15:15:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:07.126 15:15:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:07.126 15:15:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:07.126 00:05:07.126 real 0m0.081s 00:05:07.126 user 0m0.053s 00:05:07.126 sys 0m0.027s 00:05:07.126 15:15:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:07.126 15:15:24 -- common/autotest_common.sh@10 -- # set +x 00:05:07.126 ************************************ 00:05:07.126 END TEST skip_rpc_with_delay 00:05:07.126 ************************************ 00:05:07.126 15:15:24 -- rpc/skip_rpc.sh@77 -- # uname 00:05:07.126 15:15:24 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:07.126 15:15:24 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:07.126 15:15:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.126 15:15:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.126 15:15:24 -- common/autotest_common.sh@10 -- # set +x 00:05:07.387 ************************************ 00:05:07.387 START TEST exit_on_failed_rpc_init 00:05:07.387 ************************************ 00:05:07.387 15:15:24 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:05:07.387 15:15:24 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1419234 00:05:07.387 15:15:24 -- rpc/skip_rpc.sh@63 -- # waitforlisten 1419234 00:05:07.387 15:15:24 -- common/autotest_common.sh@817 -- # '[' -z 1419234 
']' 00:05:07.387 15:15:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.387 15:15:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:07.387 15:15:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.387 15:15:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:07.387 15:15:24 -- common/autotest_common.sh@10 -- # set +x 00:05:07.387 15:15:24 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.387 [2024-04-26 15:15:24.779372] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:05:07.387 [2024-04-26 15:15:24.779421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1419234 ] 00:05:07.387 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.648 [2024-04-26 15:15:24.840760] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.648 [2024-04-26 15:15:24.909332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.220 15:15:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:08.220 15:15:25 -- common/autotest_common.sh@850 -- # return 0 00:05:08.220 15:15:25 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.220 15:15:25 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:08.220 15:15:25 -- common/autotest_common.sh@638 -- # local es=0 00:05:08.220 15:15:25 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:08.220 15:15:25 -- 
common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.220 15:15:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:08.220 15:15:25 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.220 15:15:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:08.220 15:15:25 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.220 15:15:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:08.220 15:15:25 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.220 15:15:25 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:08.220 15:15:25 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:08.220 [2024-04-26 15:15:25.590362] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:05:08.220 [2024-04-26 15:15:25.590413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1419397 ] 00:05:08.220 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.220 [2024-04-26 15:15:25.666843] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.481 [2024-04-26 15:15:25.728875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.481 [2024-04-26 15:15:25.728938] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:08.481 [2024-04-26 15:15:25.728948] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:08.481 [2024-04-26 15:15:25.728954] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:08.481 15:15:25 -- common/autotest_common.sh@641 -- # es=234 00:05:08.481 15:15:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:08.481 15:15:25 -- common/autotest_common.sh@650 -- # es=106 00:05:08.481 15:15:25 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:08.481 15:15:25 -- common/autotest_common.sh@658 -- # es=1 00:05:08.481 15:15:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:08.481 15:15:25 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:08.481 15:15:25 -- rpc/skip_rpc.sh@70 -- # killprocess 1419234 00:05:08.481 15:15:25 -- common/autotest_common.sh@936 -- # '[' -z 1419234 ']' 00:05:08.481 15:15:25 -- common/autotest_common.sh@940 -- # kill -0 1419234 00:05:08.481 15:15:25 -- common/autotest_common.sh@941 -- # uname 00:05:08.481 15:15:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:08.481 15:15:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1419234 00:05:08.481 15:15:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:08.481 15:15:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:08.481 15:15:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1419234' 00:05:08.481 killing process with pid 1419234 00:05:08.481 15:15:25 -- common/autotest_common.sh@955 -- # kill 1419234 00:05:08.481 15:15:25 -- common/autotest_common.sh@960 -- # wait 1419234 00:05:08.743 00:05:08.743 real 0m1.322s 00:05:08.743 user 0m1.531s 00:05:08.743 sys 0m0.369s 00:05:08.743 15:15:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:08.743 15:15:26 -- common/autotest_common.sh@10 -- # set +x 00:05:08.743 ************************************ 00:05:08.743 END TEST exit_on_failed_rpc_init 
00:05:08.743 ************************************ 00:05:08.743 15:15:26 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:08.743 00:05:08.743 real 0m14.051s 00:05:08.743 user 0m13.352s 00:05:08.743 sys 0m1.621s 00:05:08.743 15:15:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:08.743 15:15:26 -- common/autotest_common.sh@10 -- # set +x 00:05:08.743 ************************************ 00:05:08.743 END TEST skip_rpc 00:05:08.743 ************************************ 00:05:08.743 15:15:26 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:08.743 15:15:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.743 15:15:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.743 15:15:26 -- common/autotest_common.sh@10 -- # set +x 00:05:09.004 ************************************ 00:05:09.004 START TEST rpc_client 00:05:09.004 ************************************ 00:05:09.005 15:15:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:09.005 * Looking for test storage... 
00:05:09.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:09.005 15:15:26 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:09.005 OK 00:05:09.005 15:15:26 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:09.005 00:05:09.005 real 0m0.136s 00:05:09.005 user 0m0.050s 00:05:09.005 sys 0m0.094s 00:05:09.005 15:15:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:09.005 15:15:26 -- common/autotest_common.sh@10 -- # set +x 00:05:09.005 ************************************ 00:05:09.005 END TEST rpc_client 00:05:09.005 ************************************ 00:05:09.265 15:15:26 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:09.265 15:15:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.265 15:15:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.265 15:15:26 -- common/autotest_common.sh@10 -- # set +x 00:05:09.265 ************************************ 00:05:09.265 START TEST json_config 00:05:09.265 ************************************ 00:05:09.265 15:15:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:09.265 15:15:26 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:09.265 15:15:26 -- nvmf/common.sh@7 -- # uname -s 00:05:09.265 15:15:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.265 15:15:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.265 15:15:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.265 15:15:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:09.265 15:15:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.265 15:15:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.265 15:15:26 -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.265 15:15:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.265 15:15:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.265 15:15:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:09.526 15:15:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:09.526 15:15:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:09.526 15:15:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:09.526 15:15:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:09.526 15:15:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:09.526 15:15:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:09.526 15:15:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:09.526 15:15:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.526 15:15:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.526 15:15:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.526 15:15:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.526 15:15:26 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.526 15:15:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.526 15:15:26 -- paths/export.sh@5 -- # export PATH 00:05:09.526 15:15:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.526 15:15:26 -- nvmf/common.sh@47 -- # : 0 00:05:09.526 15:15:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:09.526 15:15:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:09.526 15:15:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:09.526 15:15:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.526 15:15:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:09.526 15:15:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:09.526 15:15:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:09.526 15:15:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:09.526 
15:15:26 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:09.526 15:15:26 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:09.526 15:15:26 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:09.526 15:15:26 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:09.526 15:15:26 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:09.526 15:15:26 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:09.526 15:15:26 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:09.526 15:15:26 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:09.526 15:15:26 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:09.526 15:15:26 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:09.526 15:15:26 -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:09.526 15:15:26 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:09.526 15:15:26 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:09.526 15:15:26 -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:09.526 15:15:26 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:09.526 15:15:26 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:09.526 INFO: JSON configuration test init 00:05:09.526 15:15:26 -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:09.526 15:15:26 -- json_config/json_config.sh@262 -- # 
timing_enter json_config_test_init 00:05:09.526 15:15:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:09.526 15:15:26 -- common/autotest_common.sh@10 -- # set +x 00:05:09.526 15:15:26 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:09.526 15:15:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:09.526 15:15:26 -- common/autotest_common.sh@10 -- # set +x 00:05:09.526 15:15:26 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:09.526 15:15:26 -- json_config/common.sh@9 -- # local app=target 00:05:09.526 15:15:26 -- json_config/common.sh@10 -- # shift 00:05:09.527 15:15:26 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:09.527 15:15:26 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:09.527 15:15:26 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:09.527 15:15:26 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.527 15:15:26 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.527 15:15:26 -- json_config/common.sh@22 -- # app_pid["$app"]=1419852 00:05:09.527 15:15:26 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:09.527 Waiting for target to run... 00:05:09.527 15:15:26 -- json_config/common.sh@25 -- # waitforlisten 1419852 /var/tmp/spdk_tgt.sock 00:05:09.527 15:15:26 -- common/autotest_common.sh@817 -- # '[' -z 1419852 ']' 00:05:09.527 15:15:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:09.527 15:15:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:09.527 15:15:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:09.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:09.527 15:15:26 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:09.527 15:15:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:09.527 15:15:26 -- common/autotest_common.sh@10 -- # set +x 00:05:09.527 [2024-04-26 15:15:26.804778] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:05:09.527 [2024-04-26 15:15:26.804853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1419852 ] 00:05:09.527 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.787 [2024-04-26 15:15:27.220569] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.047 [2024-04-26 15:15:27.280028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.308 15:15:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:10.308 15:15:27 -- common/autotest_common.sh@850 -- # return 0 00:05:10.308 15:15:27 -- json_config/common.sh@26 -- # echo '' 00:05:10.308 00:05:10.308 15:15:27 -- json_config/json_config.sh@269 -- # create_accel_config 00:05:10.308 15:15:27 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:10.308 15:15:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:10.308 15:15:27 -- common/autotest_common.sh@10 -- # set +x 00:05:10.308 15:15:27 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:10.308 15:15:27 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:10.308 15:15:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:10.308 15:15:27 -- common/autotest_common.sh@10 -- # set +x 00:05:10.308 15:15:27 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 
00:05:10.308 15:15:27 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:10.308 15:15:27 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:10.878 15:15:28 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:10.878 15:15:28 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:10.878 15:15:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:10.878 15:15:28 -- common/autotest_common.sh@10 -- # set +x 00:05:10.878 15:15:28 -- json_config/json_config.sh@45 -- # local ret=0 00:05:10.878 15:15:28 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:10.878 15:15:28 -- json_config/json_config.sh@46 -- # local enabled_types 00:05:10.878 15:15:28 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:10.878 15:15:28 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:10.878 15:15:28 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:11.137 15:15:28 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:11.137 15:15:28 -- json_config/json_config.sh@48 -- # local get_types 00:05:11.137 15:15:28 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:11.137 15:15:28 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:11.137 15:15:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:11.137 15:15:28 -- common/autotest_common.sh@10 -- # set +x 00:05:11.137 15:15:28 -- json_config/json_config.sh@55 -- # return 0 00:05:11.137 15:15:28 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:11.137 15:15:28 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:11.137 15:15:28 -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:11.137 15:15:28 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:11.137 15:15:28 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:11.137 15:15:28 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:11.137 15:15:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:11.137 15:15:28 -- common/autotest_common.sh@10 -- # set +x 00:05:11.137 15:15:28 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:11.137 15:15:28 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:11.137 15:15:28 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:11.137 15:15:28 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:11.137 15:15:28 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:11.137 MallocForNvmf0 00:05:11.137 15:15:28 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:11.137 15:15:28 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:11.397 MallocForNvmf1 00:05:11.397 15:15:28 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:11.397 15:15:28 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:11.397 [2024-04-26 15:15:28.822515] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:11.397 15:15:28 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:11.397 15:15:28 -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:11.657 15:15:29 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:11.657 15:15:29 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:11.917 15:15:29 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:11.917 15:15:29 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:11.917 15:15:29 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:11.917 15:15:29 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:12.196 [2024-04-26 15:15:29.444547] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:12.196 15:15:29 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:12.196 15:15:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:12.196 15:15:29 -- common/autotest_common.sh@10 -- # set +x 00:05:12.196 15:15:29 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:12.196 15:15:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:12.196 15:15:29 -- common/autotest_common.sh@10 -- # set +x 00:05:12.196 15:15:29 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:12.196 15:15:29 -- json_config/json_config.sh@300 -- # tgt_rpc 
bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:12.196 15:15:29 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:12.455 MallocBdevForConfigChangeCheck 00:05:12.455 15:15:29 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:12.455 15:15:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:12.455 15:15:29 -- common/autotest_common.sh@10 -- # set +x 00:05:12.455 15:15:29 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:12.455 15:15:29 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:12.714 15:15:30 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:12.714 INFO: shutting down applications... 00:05:12.714 15:15:30 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:12.714 15:15:30 -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:12.714 15:15:30 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:12.714 15:15:30 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:12.977 Calling clear_iscsi_subsystem 00:05:12.978 Calling clear_nvmf_subsystem 00:05:12.978 Calling clear_nbd_subsystem 00:05:12.978 Calling clear_ublk_subsystem 00:05:12.978 Calling clear_vhost_blk_subsystem 00:05:12.978 Calling clear_vhost_scsi_subsystem 00:05:12.978 Calling clear_bdev_subsystem 00:05:13.237 15:15:30 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:13.237 15:15:30 -- json_config/json_config.sh@343 -- # count=100 00:05:13.237 15:15:30 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:13.237 
15:15:30 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.237 15:15:30 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:13.237 15:15:30 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:13.497 15:15:30 -- json_config/json_config.sh@345 -- # break 00:05:13.497 15:15:30 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:13.497 15:15:30 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:13.497 15:15:30 -- json_config/common.sh@31 -- # local app=target 00:05:13.497 15:15:30 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:13.497 15:15:30 -- json_config/common.sh@35 -- # [[ -n 1419852 ]] 00:05:13.497 15:15:30 -- json_config/common.sh@38 -- # kill -SIGINT 1419852 00:05:13.497 15:15:30 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:13.497 15:15:30 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.497 15:15:30 -- json_config/common.sh@41 -- # kill -0 1419852 00:05:13.497 15:15:30 -- json_config/common.sh@45 -- # sleep 0.5 00:05:14.067 15:15:31 -- json_config/common.sh@40 -- # (( i++ )) 00:05:14.067 15:15:31 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.067 15:15:31 -- json_config/common.sh@41 -- # kill -0 1419852 00:05:14.067 15:15:31 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:14.067 15:15:31 -- json_config/common.sh@43 -- # break 00:05:14.067 15:15:31 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:14.067 15:15:31 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:14.067 SPDK target shutdown done 00:05:14.067 15:15:31 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:14.067 INFO: relaunching applications... 
00:05:14.067 15:15:31 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.067 15:15:31 -- json_config/common.sh@9 -- # local app=target 00:05:14.067 15:15:31 -- json_config/common.sh@10 -- # shift 00:05:14.067 15:15:31 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:14.067 15:15:31 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:14.067 15:15:31 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:14.067 15:15:31 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.067 15:15:31 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.067 15:15:31 -- json_config/common.sh@22 -- # app_pid["$app"]=1420718 00:05:14.067 15:15:31 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:14.067 Waiting for target to run... 00:05:14.067 15:15:31 -- json_config/common.sh@25 -- # waitforlisten 1420718 /var/tmp/spdk_tgt.sock 00:05:14.067 15:15:31 -- common/autotest_common.sh@817 -- # '[' -z 1420718 ']' 00:05:14.067 15:15:31 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.067 15:15:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:14.067 15:15:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:14.067 15:15:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:14.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:14.067 15:15:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:14.067 15:15:31 -- common/autotest_common.sh@10 -- # set +x 00:05:14.067 [2024-04-26 15:15:31.321001] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:05:14.067 [2024-04-26 15:15:31.321079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1420718 ] 00:05:14.067 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.327 [2024-04-26 15:15:31.589473] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.327 [2024-04-26 15:15:31.640265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.896 [2024-04-26 15:15:32.127800] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:14.896 [2024-04-26 15:15:32.160174] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:14.896 15:15:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:14.896 15:15:32 -- common/autotest_common.sh@850 -- # return 0 00:05:14.896 15:15:32 -- json_config/common.sh@26 -- # echo '' 00:05:14.896 00:05:14.896 15:15:32 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:14.897 15:15:32 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:14.897 INFO: Checking if target configuration is the same... 00:05:14.897 15:15:32 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.897 15:15:32 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:14.897 15:15:32 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:14.897 + '[' 2 -ne 2 ']' 00:05:14.897 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:14.897 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:14.897 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:14.897 +++ basename /dev/fd/62 00:05:14.897 ++ mktemp /tmp/62.XXX 00:05:14.897 + tmp_file_1=/tmp/62.bzK 00:05:14.897 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.897 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:14.897 + tmp_file_2=/tmp/spdk_tgt_config.json.eF0 00:05:14.897 + ret=0 00:05:14.897 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.156 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.156 + diff -u /tmp/62.bzK /tmp/spdk_tgt_config.json.eF0 00:05:15.156 + echo 'INFO: JSON config files are the same' 00:05:15.156 INFO: JSON config files are the same 00:05:15.156 + rm /tmp/62.bzK /tmp/spdk_tgt_config.json.eF0 00:05:15.156 + exit 0 00:05:15.156 15:15:32 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:15.156 15:15:32 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:15.156 INFO: changing configuration and checking if this can be detected... 
00:05:15.156 15:15:32 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:15.156 15:15:32 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:15.416 15:15:32 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.416 15:15:32 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:15.416 15:15:32 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:15.416 + '[' 2 -ne 2 ']' 00:05:15.416 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:15.416 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:15.416 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:15.416 +++ basename /dev/fd/62 00:05:15.416 ++ mktemp /tmp/62.XXX 00:05:15.416 + tmp_file_1=/tmp/62.0jD 00:05:15.416 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.416 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:15.416 + tmp_file_2=/tmp/spdk_tgt_config.json.4Lj 00:05:15.416 + ret=0 00:05:15.416 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.677 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.677 + diff -u /tmp/62.0jD /tmp/spdk_tgt_config.json.4Lj 00:05:15.677 + ret=1 00:05:15.677 + echo '=== Start of file: /tmp/62.0jD ===' 00:05:15.677 + cat /tmp/62.0jD 00:05:15.677 + echo '=== End of file: /tmp/62.0jD ===' 00:05:15.677 + echo '' 00:05:15.677 + echo '=== Start of file: /tmp/spdk_tgt_config.json.4Lj ===' 00:05:15.677 + cat /tmp/spdk_tgt_config.json.4Lj 00:05:15.677 + echo '=== End of file: /tmp/spdk_tgt_config.json.4Lj ===' 00:05:15.677 + echo '' 00:05:15.677 + rm /tmp/62.0jD /tmp/spdk_tgt_config.json.4Lj 00:05:15.677 + exit 1 00:05:15.677 15:15:33 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:15.677 INFO: configuration change detected. 
00:05:15.677 15:15:33 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:15.677 15:15:33 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:15.677 15:15:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:15.677 15:15:33 -- common/autotest_common.sh@10 -- # set +x 00:05:15.677 15:15:33 -- json_config/json_config.sh@307 -- # local ret=0 00:05:15.677 15:15:33 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:15.677 15:15:33 -- json_config/json_config.sh@317 -- # [[ -n 1420718 ]] 00:05:15.677 15:15:33 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:15.677 15:15:33 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:15.677 15:15:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:15.677 15:15:33 -- common/autotest_common.sh@10 -- # set +x 00:05:15.677 15:15:33 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:15.677 15:15:33 -- json_config/json_config.sh@193 -- # uname -s 00:05:15.677 15:15:33 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:15.677 15:15:33 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:15.677 15:15:33 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:15.677 15:15:33 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:15.677 15:15:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:15.677 15:15:33 -- common/autotest_common.sh@10 -- # set +x 00:05:15.936 15:15:33 -- json_config/json_config.sh@323 -- # killprocess 1420718 00:05:15.937 15:15:33 -- common/autotest_common.sh@936 -- # '[' -z 1420718 ']' 00:05:15.937 15:15:33 -- common/autotest_common.sh@940 -- # kill -0 1420718 00:05:15.937 15:15:33 -- common/autotest_common.sh@941 -- # uname 00:05:15.937 15:15:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:15.937 15:15:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1420718 00:05:15.937 
15:15:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:15.937 15:15:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:15.937 15:15:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1420718' 00:05:15.937 killing process with pid 1420718 00:05:15.937 15:15:33 -- common/autotest_common.sh@955 -- # kill 1420718 00:05:15.937 15:15:33 -- common/autotest_common.sh@960 -- # wait 1420718 00:05:16.196 15:15:33 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:16.196 15:15:33 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:16.196 15:15:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:16.196 15:15:33 -- common/autotest_common.sh@10 -- # set +x 00:05:16.196 15:15:33 -- json_config/json_config.sh@328 -- # return 0 00:05:16.196 15:15:33 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:16.196 INFO: Success 00:05:16.196 00:05:16.196 real 0m6.894s 00:05:16.196 user 0m8.234s 00:05:16.196 sys 0m1.817s 00:05:16.196 15:15:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:16.196 15:15:33 -- common/autotest_common.sh@10 -- # set +x 00:05:16.196 ************************************ 00:05:16.196 END TEST json_config 00:05:16.196 ************************************ 00:05:16.196 15:15:33 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:16.196 15:15:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.196 15:15:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.196 15:15:33 -- common/autotest_common.sh@10 -- # set +x 00:05:16.457 ************************************ 00:05:16.457 START TEST json_config_extra_key 00:05:16.457 ************************************ 00:05:16.457 
15:15:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:16.457 15:15:33 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:16.457 15:15:33 -- nvmf/common.sh@7 -- # uname -s 00:05:16.457 15:15:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:16.457 15:15:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:16.457 15:15:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:16.457 15:15:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:16.457 15:15:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:16.457 15:15:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:16.457 15:15:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:16.457 15:15:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:16.457 15:15:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:16.457 15:15:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:16.457 15:15:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:16.457 15:15:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:16.457 15:15:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:16.457 15:15:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:16.457 15:15:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:16.457 15:15:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:16.457 15:15:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:16.457 15:15:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:16.457 15:15:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:16.457 15:15:33 -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:05:16.457 15:15:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.457 15:15:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.457 15:15:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.457 15:15:33 -- paths/export.sh@5 -- # export PATH 00:05:16.457 15:15:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.457 15:15:33 -- nvmf/common.sh@47 -- # : 0 00:05:16.457 15:15:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:05:16.457 15:15:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:16.457 15:15:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:16.457 15:15:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:16.457 15:15:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:16.457 15:15:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:16.457 15:15:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:16.457 15:15:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:16.457 15:15:33 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:16.457 15:15:33 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:16.457 15:15:33 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:16.457 15:15:33 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:16.458 15:15:33 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:16.458 15:15:33 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:16.458 15:15:33 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:16.458 15:15:33 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:16.458 15:15:33 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:16.458 15:15:33 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:16.458 15:15:33 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:16.458 INFO: launching applications... 
00:05:16.458 15:15:33 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:16.458 15:15:33 -- json_config/common.sh@9 -- # local app=target 00:05:16.458 15:15:33 -- json_config/common.sh@10 -- # shift 00:05:16.458 15:15:33 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:16.458 15:15:33 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:16.458 15:15:33 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:16.458 15:15:33 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:16.458 15:15:33 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:16.458 15:15:33 -- json_config/common.sh@22 -- # app_pid["$app"]=1421445 00:05:16.458 15:15:33 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:16.458 Waiting for target to run... 00:05:16.458 15:15:33 -- json_config/common.sh@25 -- # waitforlisten 1421445 /var/tmp/spdk_tgt.sock 00:05:16.458 15:15:33 -- common/autotest_common.sh@817 -- # '[' -z 1421445 ']' 00:05:16.458 15:15:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:16.458 15:15:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:16.458 15:15:33 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:16.458 15:15:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:16.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:16.458 15:15:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:16.458 15:15:33 -- common/autotest_common.sh@10 -- # set +x 00:05:16.458 [2024-04-26 15:15:33.882991] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:05:16.458 [2024-04-26 15:15:33.883054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1421445 ] 00:05:16.719 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.979 [2024-04-26 15:15:34.173720] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.979 [2024-04-26 15:15:34.223382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.240 15:15:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:17.240 15:15:34 -- common/autotest_common.sh@850 -- # return 0 00:05:17.240 15:15:34 -- json_config/common.sh@26 -- # echo '' 00:05:17.240 00:05:17.240 15:15:34 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:17.240 INFO: shutting down applications... 
00:05:17.240 15:15:34 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:05:17.240 15:15:34 -- json_config/common.sh@31 -- # local app=target
00:05:17.240 15:15:34 -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:17.240 15:15:34 -- json_config/common.sh@35 -- # [[ -n 1421445 ]]
00:05:17.240 15:15:34 -- json_config/common.sh@38 -- # kill -SIGINT 1421445
00:05:17.240 15:15:34 -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:17.240 15:15:34 -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:17.240 15:15:34 -- json_config/common.sh@41 -- # kill -0 1421445
00:05:17.240 15:15:34 -- json_config/common.sh@45 -- # sleep 0.5
00:05:17.809 15:15:35 -- json_config/common.sh@40 -- # (( i++ ))
00:05:17.809 15:15:35 -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:17.809 15:15:35 -- json_config/common.sh@41 -- # kill -0 1421445
00:05:17.809 15:15:35 -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:17.809 15:15:35 -- json_config/common.sh@43 -- # break
00:05:17.809 15:15:35 -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:17.809 15:15:35 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:05:17.809 SPDK target shutdown done
00:05:17.809 15:15:35 -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:05:17.809 Success
00:05:17.809
00:05:17.809 real	0m1.451s
00:05:17.809 user	0m1.090s
00:05:17.809 sys	0m0.393s
00:05:17.809 15:15:35 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:17.809 15:15:35 -- common/autotest_common.sh@10 -- # set +x
00:05:17.809 ************************************
00:05:17.809 END TEST json_config_extra_key
00:05:17.809 ************************************
00:05:17.810 15:15:35 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:17.810 15:15:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:17.810 15:15:35 -- common/autotest_common.sh@1093 -- # xtrace_disable
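The shutdown traced above sends SIGINT, then polls `kill -0` (an existence check that delivers no signal) up to 30 times with a 0.5 s sleep between attempts. A minimal standalone sketch of that loop, assuming nothing beyond bash and coreutils — `shutdown_app` is an illustrative name, not the helper in json_config/common.sh:

```shell
#!/usr/bin/env bash
# Send a signal, then poll `kill -0` up to 30 times, 0.5 s apart,
# until the process disappears -- the pattern traced in the log above.
shutdown_app() {
    local pid=$1 sig=${2:-SIGINT}
    kill -s "$sig" "$pid" 2>/dev/null
    local i
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    return 1
}

# Demo: non-interactive shells ignore SIGINT in background jobs, so the
# demo uses SIGTERM; the traced run signals a separately launched spdk_tgt.
sleep 60 &
demo_pid=$!
shutdown_app "$demo_pid" SIGTERM
```

The bound of 30 iterations matches `(( i < 30 ))` in the trace, i.e. roughly 15 seconds before giving up.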
00:05:17.810 15:15:35 -- common/autotest_common.sh@10 -- # set +x
00:05:18.173 ************************************
00:05:18.173 START TEST alias_rpc
00:05:18.173 ************************************
00:05:18.173 15:15:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:18.173 * Looking for test storage...
00:05:18.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:05:18.173 15:15:35 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:05:18.173 15:15:35 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1421834
00:05:18.173 15:15:35 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1421834
00:05:18.173 15:15:35 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:18.173 15:15:35 -- common/autotest_common.sh@817 -- # '[' -z 1421834 ']'
00:05:18.173 15:15:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:18.173 15:15:35 -- common/autotest_common.sh@822 -- # local max_retries=100
00:05:18.173 15:15:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:18.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:18.173 15:15:35 -- common/autotest_common.sh@826 -- # xtrace_disable
00:05:18.173 15:15:35 -- common/autotest_common.sh@10 -- # set +x
00:05:18.173 [2024-04-26 15:15:35.524555] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:05:18.173 [2024-04-26 15:15:35.524615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1421834 ]
00:05:18.457 EAL: No free 2048 kB hugepages reported on node 1
00:05:18.457 [2024-04-26 15:15:35.589601] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:18.457 [2024-04-26 15:15:35.661700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:19.026 15:15:36 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:05:19.026 15:15:36 -- common/autotest_common.sh@850 -- # return 0
00:05:19.026 15:15:36 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i
00:05:19.286 15:15:36 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1421834
00:05:19.286 15:15:36 -- common/autotest_common.sh@936 -- # '[' -z 1421834 ']'
00:05:19.286 15:15:36 -- common/autotest_common.sh@940 -- # kill -0 1421834
00:05:19.286 15:15:36 -- common/autotest_common.sh@941 -- # uname
00:05:19.286 15:15:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:19.286 15:15:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1421834
00:05:19.286 15:15:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:19.286 15:15:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:19.286 15:15:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1421834'
00:05:19.286 killing process with pid 1421834
00:05:19.286 15:15:36 -- common/autotest_common.sh@955 -- # kill 1421834
00:05:19.286 15:15:36 -- common/autotest_common.sh@960 -- # wait 1421834
00:05:19.546
00:05:19.547 real	0m1.387s
00:05:19.547 user	0m1.522s
00:05:19.547 sys	0m0.388s
00:05:19.547 15:15:36 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:19.547 15:15:36 -- common/autotest_common.sh@10 -- # set +x
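The `killprocess` trace above follows a careful order: confirm the PID still exists with `kill -0`, resolve its comm name via `ps --no-headers -o comm=` on Linux, refuse to signal a bare `sudo` wrapper, then kill and reap with `wait`. A sketch of that pattern, with `killprocess_sketch` as an illustrative name rather than the real helper in common/autotest_common.sh:

```shell
#!/usr/bin/env bash
# Check liveness, resolve the process's comm name, refuse to kill a bare
# `sudo` wrapper, then signal and reap -- the order traced in the log above.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1     # already gone?
    local process_name
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    else
        process_name=$(ps -o comm= "$pid")
    fi
    if [ "$process_name" = sudo ]; then        # never kill the sudo wrapper
        return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null
    return 0
}

# Demo: start a disposable process and tear it down.
sleep 60 &
killprocess_sketch $!
```

The `wait` at the end reaps the child so the PID cannot linger as a zombie and confuse a later `kill -0` check.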
00:05:19.547 ************************************
00:05:19.547 END TEST alias_rpc
00:05:19.547 ************************************
00:05:19.547 15:15:36 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]]
00:05:19.547 15:15:36 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:05:19.547 15:15:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:19.547 15:15:36 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:19.547 15:15:36 -- common/autotest_common.sh@10 -- # set +x
00:05:19.547 ************************************
00:05:19.547 START TEST spdkcli_tcp
00:05:19.547 ************************************
00:05:19.547 15:15:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:05:19.807 * Looking for test storage...
00:05:19.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:05:19.807 15:15:37 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:05:19.807 15:15:37 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:05:19.807 15:15:37 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:05:19.807 15:15:37 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:05:19.807 15:15:37 -- spdkcli/tcp.sh@19 -- # PORT=9998
00:05:19.807 15:15:37 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:05:19.807 15:15:37 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:05:19.807 15:15:37 -- common/autotest_common.sh@710 -- # xtrace_disable
00:05:19.807 15:15:37 -- common/autotest_common.sh@10 -- # set +x
00:05:19.807 15:15:37 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1422236
00:05:19.807 15:15:37 -- spdkcli/tcp.sh@27 -- # waitforlisten 1422236
00:05:19.807 15:15:37 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:05:19.807 15:15:37 -- common/autotest_common.sh@817 -- # '[' -z 1422236 ']'
00:05:19.807 15:15:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:19.807 15:15:37 -- common/autotest_common.sh@822 -- # local max_retries=100
00:05:19.807 15:15:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:19.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:19.807 15:15:37 -- common/autotest_common.sh@826 -- # xtrace_disable
00:05:19.807 15:15:37 -- common/autotest_common.sh@10 -- # set +x
00:05:19.807 [2024-04-26 15:15:37.115454] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:05:19.807 [2024-04-26 15:15:37.115520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1422236 ]
00:05:19.807 EAL: No free 2048 kB hugepages reported on node 1
00:05:19.807 [2024-04-26 15:15:37.182958] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:19.807 [2024-04-26 15:15:37.255874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:19.807 [2024-04-26 15:15:37.255880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:20.761 15:15:37 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:05:20.761 15:15:37 -- common/autotest_common.sh@850 -- # return 0
00:05:20.762 15:15:37 -- spdkcli/tcp.sh@31 -- # socat_pid=1422269
00:05:20.762 15:15:37 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:05:20.762 15:15:37 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
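In the trace above, socat bridges TCP port 9998 to the target's UNIX socket while `rpc.py -r 100 -t 2` retries the RPC up to 100 times with a 2-second timeout per attempt, tolerating the window before the bridge is up. A bash analogue of that retry-with-timeout behavior, assuming only coreutils `timeout` — `retry_cmd` is a hypothetical helper for illustration, not part of SPDK's scripts:

```shell
#!/usr/bin/env bash
# Retry a command up to $1 times, giving each attempt at most $2 seconds,
# mirroring the semantics of rpc.py's -r (retries) and -t (timeout) flags.
retry_cmd() {
    local retries=$1 timeout_s=$2
    shift 2
    local i
    for ((i = 1; i <= retries; i++)); do
        if timeout "$timeout_s" "$@"; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# Demo: the command starts failing and succeeds once a marker file appears,
# much like an RPC that fails until the socat listener comes up.
marker=$(mktemp -u)
(sleep 0.3; touch "$marker") &
retry_cmd 100 2 test -e "$marker" && echo "connected"
```

The short sleep between attempts keeps the loop from spinning while the listener is still starting.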
00:05:20.762 [
00:05:20.762 "bdev_malloc_delete",
00:05:20.762 "bdev_malloc_create",
00:05:20.762 "bdev_null_resize",
00:05:20.762 "bdev_null_delete",
00:05:20.762 "bdev_null_create",
00:05:20.762 "bdev_nvme_cuse_unregister",
00:05:20.762 "bdev_nvme_cuse_register",
00:05:20.762 "bdev_opal_new_user",
00:05:20.762 "bdev_opal_set_lock_state",
00:05:20.762 "bdev_opal_delete",
00:05:20.762 "bdev_opal_get_info",
00:05:20.762 "bdev_opal_create",
00:05:20.762 "bdev_nvme_opal_revert",
00:05:20.762 "bdev_nvme_opal_init",
00:05:20.762 "bdev_nvme_send_cmd",
00:05:20.762 "bdev_nvme_get_path_iostat",
00:05:20.762 "bdev_nvme_get_mdns_discovery_info",
00:05:20.762 "bdev_nvme_stop_mdns_discovery",
00:05:20.762 "bdev_nvme_start_mdns_discovery",
00:05:20.762 "bdev_nvme_set_multipath_policy",
00:05:20.762 "bdev_nvme_set_preferred_path",
00:05:20.762 "bdev_nvme_get_io_paths",
00:05:20.762 "bdev_nvme_remove_error_injection",
00:05:20.762 "bdev_nvme_add_error_injection",
00:05:20.762 "bdev_nvme_get_discovery_info",
00:05:20.762 "bdev_nvme_stop_discovery",
00:05:20.762 "bdev_nvme_start_discovery",
00:05:20.762 "bdev_nvme_get_controller_health_info",
00:05:20.762 "bdev_nvme_disable_controller",
00:05:20.762 "bdev_nvme_enable_controller",
00:05:20.762 "bdev_nvme_reset_controller",
00:05:20.762 "bdev_nvme_get_transport_statistics",
00:05:20.762 "bdev_nvme_apply_firmware",
00:05:20.762 "bdev_nvme_detach_controller",
00:05:20.762 "bdev_nvme_get_controllers",
00:05:20.762 "bdev_nvme_attach_controller",
00:05:20.762 "bdev_nvme_set_hotplug",
00:05:20.762 "bdev_nvme_set_options",
00:05:20.762 "bdev_passthru_delete",
00:05:20.762 "bdev_passthru_create",
00:05:20.762 "bdev_lvol_grow_lvstore",
00:05:20.762 "bdev_lvol_get_lvols",
00:05:20.762 "bdev_lvol_get_lvstores",
00:05:20.762 "bdev_lvol_delete",
00:05:20.762 "bdev_lvol_set_read_only",
00:05:20.762 "bdev_lvol_resize",
00:05:20.762 "bdev_lvol_decouple_parent",
00:05:20.762 "bdev_lvol_inflate",
00:05:20.762 "bdev_lvol_rename",
00:05:20.762 "bdev_lvol_clone_bdev",
00:05:20.762 "bdev_lvol_clone",
00:05:20.762 "bdev_lvol_snapshot",
00:05:20.762 "bdev_lvol_create",
00:05:20.762 "bdev_lvol_delete_lvstore",
00:05:20.762 "bdev_lvol_rename_lvstore",
00:05:20.762 "bdev_lvol_create_lvstore",
00:05:20.762 "bdev_raid_set_options",
00:05:20.762 "bdev_raid_remove_base_bdev",
00:05:20.762 "bdev_raid_add_base_bdev",
00:05:20.762 "bdev_raid_delete",
00:05:20.762 "bdev_raid_create",
00:05:20.762 "bdev_raid_get_bdevs",
00:05:20.762 "bdev_error_inject_error",
00:05:20.762 "bdev_error_delete",
00:05:20.762 "bdev_error_create",
00:05:20.762 "bdev_split_delete",
00:05:20.762 "bdev_split_create",
00:05:20.762 "bdev_delay_delete",
00:05:20.762 "bdev_delay_create",
00:05:20.762 "bdev_delay_update_latency",
00:05:20.762 "bdev_zone_block_delete",
00:05:20.762 "bdev_zone_block_create",
00:05:20.762 "blobfs_create",
00:05:20.762 "blobfs_detect",
00:05:20.762 "blobfs_set_cache_size",
00:05:20.762 "bdev_aio_delete",
00:05:20.762 "bdev_aio_rescan",
00:05:20.762 "bdev_aio_create",
00:05:20.762 "bdev_ftl_set_property",
00:05:20.762 "bdev_ftl_get_properties",
00:05:20.762 "bdev_ftl_get_stats",
00:05:20.762 "bdev_ftl_unmap",
00:05:20.762 "bdev_ftl_unload",
00:05:20.762 "bdev_ftl_delete",
00:05:20.762 "bdev_ftl_load",
00:05:20.762 "bdev_ftl_create",
00:05:20.762 "bdev_virtio_attach_controller",
00:05:20.762 "bdev_virtio_scsi_get_devices",
00:05:20.762 "bdev_virtio_detach_controller",
00:05:20.762 "bdev_virtio_blk_set_hotplug",
00:05:20.762 "bdev_iscsi_delete",
00:05:20.762 "bdev_iscsi_create",
00:05:20.762 "bdev_iscsi_set_options",
00:05:20.762 "accel_error_inject_error",
00:05:20.762 "ioat_scan_accel_module",
00:05:20.762 "dsa_scan_accel_module",
00:05:20.762 "iaa_scan_accel_module",
00:05:20.762 "vfu_virtio_create_scsi_endpoint",
00:05:20.762 "vfu_virtio_scsi_remove_target",
00:05:20.762 "vfu_virtio_scsi_add_target",
00:05:20.762 "vfu_virtio_create_blk_endpoint",
00:05:20.762 "vfu_virtio_delete_endpoint",
00:05:20.762 "keyring_file_remove_key",
00:05:20.762 "keyring_file_add_key",
00:05:20.762 "iscsi_get_histogram",
00:05:20.762 "iscsi_enable_histogram",
00:05:20.762 "iscsi_set_options",
00:05:20.762 "iscsi_get_auth_groups",
00:05:20.762 "iscsi_auth_group_remove_secret",
00:05:20.762 "iscsi_auth_group_add_secret",
00:05:20.762 "iscsi_delete_auth_group",
00:05:20.762 "iscsi_create_auth_group",
00:05:20.762 "iscsi_set_discovery_auth",
00:05:20.762 "iscsi_get_options",
00:05:20.762 "iscsi_target_node_request_logout",
00:05:20.762 "iscsi_target_node_set_redirect",
00:05:20.762 "iscsi_target_node_set_auth",
00:05:20.762 "iscsi_target_node_add_lun",
00:05:20.762 "iscsi_get_stats",
00:05:20.762 "iscsi_get_connections",
00:05:20.762 "iscsi_portal_group_set_auth",
00:05:20.762 "iscsi_start_portal_group",
00:05:20.762 "iscsi_delete_portal_group",
00:05:20.762 "iscsi_create_portal_group",
00:05:20.762 "iscsi_get_portal_groups",
00:05:20.762 "iscsi_delete_target_node",
00:05:20.762 "iscsi_target_node_remove_pg_ig_maps",
00:05:20.762 "iscsi_target_node_add_pg_ig_maps",
00:05:20.762 "iscsi_create_target_node",
00:05:20.762 "iscsi_get_target_nodes",
00:05:20.762 "iscsi_delete_initiator_group",
00:05:20.762 "iscsi_initiator_group_remove_initiators",
00:05:20.762 "iscsi_initiator_group_add_initiators",
00:05:20.762 "iscsi_create_initiator_group",
00:05:20.762 "iscsi_get_initiator_groups",
00:05:20.762 "nvmf_set_crdt",
00:05:20.762 "nvmf_set_config",
00:05:20.762 "nvmf_set_max_subsystems",
00:05:20.762 "nvmf_subsystem_get_listeners",
00:05:20.762 "nvmf_subsystem_get_qpairs",
00:05:20.762 "nvmf_subsystem_get_controllers",
00:05:20.762 "nvmf_get_stats",
00:05:20.762 "nvmf_get_transports",
00:05:20.762 "nvmf_create_transport",
00:05:20.762 "nvmf_get_targets",
00:05:20.762 "nvmf_delete_target",
00:05:20.762 "nvmf_create_target",
00:05:20.762 "nvmf_subsystem_allow_any_host",
00:05:20.762 "nvmf_subsystem_remove_host",
00:05:20.762 "nvmf_subsystem_add_host",
00:05:20.762 "nvmf_ns_remove_host",
00:05:20.762 "nvmf_ns_add_host",
00:05:20.762 "nvmf_subsystem_remove_ns",
00:05:20.762 "nvmf_subsystem_add_ns",
00:05:20.762 "nvmf_subsystem_listener_set_ana_state",
00:05:20.762 "nvmf_discovery_get_referrals",
00:05:20.762 "nvmf_discovery_remove_referral",
00:05:20.762 "nvmf_discovery_add_referral",
00:05:20.762 "nvmf_subsystem_remove_listener",
00:05:20.762 "nvmf_subsystem_add_listener",
00:05:20.762 "nvmf_delete_subsystem",
00:05:20.762 "nvmf_create_subsystem",
00:05:20.762 "nvmf_get_subsystems",
00:05:20.762 "env_dpdk_get_mem_stats",
00:05:20.762 "nbd_get_disks",
00:05:20.762 "nbd_stop_disk",
00:05:20.762 "nbd_start_disk",
00:05:20.762 "ublk_recover_disk",
00:05:20.762 "ublk_get_disks",
00:05:20.762 "ublk_stop_disk",
00:05:20.762 "ublk_start_disk",
00:05:20.762 "ublk_destroy_target",
00:05:20.762 "ublk_create_target",
00:05:20.762 "virtio_blk_create_transport",
00:05:20.762 "virtio_blk_get_transports",
00:05:20.762 "vhost_controller_set_coalescing",
00:05:20.762 "vhost_get_controllers",
00:05:20.762 "vhost_delete_controller",
00:05:20.762 "vhost_create_blk_controller",
00:05:20.762 "vhost_scsi_controller_remove_target",
00:05:20.762 "vhost_scsi_controller_add_target",
00:05:20.762 "vhost_start_scsi_controller",
00:05:20.762 "vhost_create_scsi_controller",
00:05:20.762 "thread_set_cpumask",
00:05:20.762 "framework_get_scheduler",
00:05:20.762 "framework_set_scheduler",
00:05:20.762 "framework_get_reactors",
00:05:20.762 "thread_get_io_channels",
00:05:20.762 "thread_get_pollers",
00:05:20.762 "thread_get_stats",
00:05:20.762 "framework_monitor_context_switch",
00:05:20.762 "spdk_kill_instance",
00:05:20.762 "log_enable_timestamps",
00:05:20.762 "log_get_flags",
00:05:20.762 "log_clear_flag",
00:05:20.762 "log_set_flag",
00:05:20.762 "log_get_level",
00:05:20.762 "log_set_level",
00:05:20.762 "log_get_print_level",
00:05:20.762 "log_set_print_level",
00:05:20.762 "framework_enable_cpumask_locks",
00:05:20.762 "framework_disable_cpumask_locks",
00:05:20.762 "framework_wait_init",
00:05:20.762 "framework_start_init",
00:05:20.762 "scsi_get_devices",
00:05:20.762 "bdev_get_histogram",
00:05:20.762 "bdev_enable_histogram",
00:05:20.762 "bdev_set_qos_limit",
00:05:20.762 "bdev_set_qd_sampling_period",
00:05:20.762 "bdev_get_bdevs",
00:05:20.762 "bdev_reset_iostat",
00:05:20.762 "bdev_get_iostat",
00:05:20.762 "bdev_examine",
00:05:20.762 "bdev_wait_for_examine",
00:05:20.762 "bdev_set_options",
00:05:20.762 "notify_get_notifications",
00:05:20.762 "notify_get_types",
00:05:20.762 "accel_get_stats",
00:05:20.762 "accel_set_options",
00:05:20.762 "accel_set_driver",
00:05:20.762 "accel_crypto_key_destroy",
00:05:20.762 "accel_crypto_keys_get",
00:05:20.762 "accel_crypto_key_create",
00:05:20.762 "accel_assign_opc",
00:05:20.762 "accel_get_module_info",
00:05:20.762 "accel_get_opc_assignments",
00:05:20.762 "vmd_rescan",
00:05:20.762 "vmd_remove_device",
00:05:20.762 "vmd_enable",
00:05:20.762 "sock_get_default_impl",
00:05:20.762 "sock_set_default_impl",
00:05:20.762 "sock_impl_set_options",
00:05:20.762 "sock_impl_get_options",
00:05:20.762 "iobuf_get_stats",
00:05:20.762 "iobuf_set_options",
00:05:20.762 "keyring_get_keys",
00:05:20.762 "framework_get_pci_devices",
00:05:20.762 "framework_get_config",
00:05:20.763 "framework_get_subsystems",
00:05:20.763 "vfu_tgt_set_base_path",
00:05:20.763 "trace_get_info",
00:05:20.763 "trace_get_tpoint_group_mask",
00:05:20.763 "trace_disable_tpoint_group",
00:05:20.763 "trace_enable_tpoint_group",
00:05:20.763 "trace_clear_tpoint_mask",
00:05:20.763 "trace_set_tpoint_mask",
00:05:20.763 "spdk_get_version",
00:05:20.763 "rpc_get_methods"
00:05:20.763 ]
00:05:20.763 15:15:38 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:05:20.763 15:15:38 -- common/autotest_common.sh@716 -- # xtrace_disable
00:05:20.763 15:15:38 -- common/autotest_common.sh@10 -- # set +x
00:05:20.763 15:15:38 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:05:20.763 15:15:38 -- spdkcli/tcp.sh@38 -- # killprocess 1422236
00:05:20.763 15:15:38 -- common/autotest_common.sh@936 -- # '[' -z 1422236 ']'
00:05:20.763 15:15:38 -- common/autotest_common.sh@940 -- # kill -0 1422236
00:05:20.763 15:15:38 -- common/autotest_common.sh@941 -- # uname
00:05:20.763 15:15:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:20.763 15:15:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1422236
00:05:20.763 15:15:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:20.763 15:15:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:20.763 15:15:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1422236'
00:05:20.763 killing process with pid 1422236
00:05:20.763 15:15:38 -- common/autotest_common.sh@955 -- # kill 1422236
00:05:20.763 15:15:38 -- common/autotest_common.sh@960 -- # wait 1422236
00:05:21.024
00:05:21.024 real	0m1.407s
00:05:21.024 user	0m2.575s
00:05:21.024 sys	0m0.420s
00:05:21.024 15:15:38 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:21.024 15:15:38 -- common/autotest_common.sh@10 -- # set +x
00:05:21.024 ************************************
00:05:21.024 END TEST spdkcli_tcp
00:05:21.024 ************************************
00:05:21.024 15:15:38 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:05:21.024 15:15:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:21.024 15:15:38 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:21.024 15:15:38 -- common/autotest_common.sh@10 -- # set +x
00:05:21.285 ************************************
00:05:21.285 START TEST dpdk_mem_utility
00:05:21.285 ************************************
00:05:21.285 15:15:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:05:21.285 * Looking for test storage...
00:05:21.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility
00:05:21.285 15:15:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:05:21.285 15:15:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1422648
00:05:21.285 15:15:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1422648
00:05:21.285 15:15:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:21.285 15:15:38 -- common/autotest_common.sh@817 -- # '[' -z 1422648 ']'
00:05:21.285 15:15:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:21.285 15:15:38 -- common/autotest_common.sh@822 -- # local max_retries=100
00:05:21.285 15:15:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:21.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:21.285 15:15:38 -- common/autotest_common.sh@826 -- # xtrace_disable
00:05:21.285 15:15:38 -- common/autotest_common.sh@10 -- # set +x
00:05:21.285 [2024-04-26 15:15:38.706504] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:05:21.285 [2024-04-26 15:15:38.706572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1422648 ]
00:05:21.545 EAL: No free 2048 kB hugepages reported on node 1
00:05:21.545 [2024-04-26 15:15:38.771761] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:21.545 [2024-04-26 15:15:38.843808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:22.115 15:15:39 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:05:22.115 15:15:39 -- common/autotest_common.sh@850 -- # return 0
00:05:22.115 15:15:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:05:22.115 15:15:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:05:22.115 15:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:22.115 15:15:39 -- common/autotest_common.sh@10 -- # set +x
00:05:22.115 {
00:05:22.115 "filename": "/tmp/spdk_mem_dump.txt"
00:05:22.115 }
00:05:22.115 15:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:22.115 15:15:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:05:22.115 DPDK memory size 814.000000 MiB in 1 heap(s)
00:05:22.115 1 heaps totaling size 814.000000 MiB
00:05:22.115 size: 814.000000 MiB heap id: 0
00:05:22.115 end heaps----------
00:05:22.115 8 mempools totaling size 598.116089 MiB
00:05:22.115 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:05:22.115 size: 158.602051 MiB name: PDU_data_out_Pool
00:05:22.115 size: 84.521057 MiB name: bdev_io_1422648
00:05:22.115 size: 51.011292 MiB name: evtpool_1422648
00:05:22.115 size: 50.003479 MiB name: msgpool_1422648
00:05:22.115 size: 21.763794 MiB name: PDU_Pool
00:05:22.115 size: 19.513306 MiB name: SCSI_TASK_Pool
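The mempool summary printed by dpdk_mem_info.py above lists each pool as `size: <MiB> MiB name: <pool>`. A small awk pass can sum those per-pool lines and recover the "8 mempools totaling" figure up to rounding of the per-pool values. The heredoc below reproduces the pool lines from this log; in a live run the same text would come from the script's output. `sum_mempools` is an illustrative helper name, not part of SPDK:

```shell
#!/usr/bin/env bash
# Sum the per-mempool sizes from a dpdk_mem_info.py summary. Only lines of
# the form "size: <MiB> MiB name: <pool>" contribute; the "totaling" header
# line does not match the pattern and is skipped.
sum_mempools() {
    awk '/^size: .* name: / { total += $2 } END { printf "%.6f MiB\n", total }'
}

# The eight pool lines as they appear in the dump above; the printed total
# agrees with the reported 598.116089 MiB up to rounding of the inputs.
sum_mempools <<'EOF'
size: 212.674988 MiB name: PDU_immediate_data_Pool
size: 158.602051 MiB name: PDU_data_out_Pool
size: 84.521057 MiB name: bdev_io_1422648
size: 51.011292 MiB name: evtpool_1422648
size: 50.003479 MiB name: msgpool_1422648
size: 21.763794 MiB name: PDU_Pool
size: 19.513306 MiB name: SCSI_TASK_Pool
size: 0.026123 MiB name: Session_Pool
EOF
```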
00:05:22.115 size: 0.026123 MiB name: Session_Pool
00:05:22.115 end mempools-------
00:05:22.115 6 memzones totaling size 4.142822 MiB
00:05:22.115 size: 1.000366 MiB name: RG_ring_0_1422648
00:05:22.115 size: 1.000366 MiB name: RG_ring_1_1422648
00:05:22.115 size: 1.000366 MiB name: RG_ring_4_1422648
00:05:22.115 size: 1.000366 MiB name: RG_ring_5_1422648
00:05:22.115 size: 0.125366 MiB name: RG_ring_2_1422648
00:05:22.115 size: 0.015991 MiB name: RG_ring_3_1422648
00:05:22.115 end memzones-------
00:05:22.115 15:15:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:05:22.376 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15
00:05:22.376 list of free elements. size: 12.519348 MiB
00:05:22.376 element at address: 0x200000400000 with size: 1.999512 MiB
00:05:22.376 element at address: 0x200018e00000 with size: 0.999878 MiB
00:05:22.376 element at address: 0x200019000000 with size: 0.999878 MiB
00:05:22.376 element at address: 0x200003e00000 with size: 0.996277 MiB
00:05:22.376 element at address: 0x200031c00000 with size: 0.994446 MiB
00:05:22.376 element at address: 0x200013800000 with size: 0.978699 MiB
00:05:22.376 element at address: 0x200007000000 with size: 0.959839 MiB
00:05:22.376 element at address: 0x200019200000 with size: 0.936584 MiB
00:05:22.376 element at address: 0x200000200000 with size: 0.841614 MiB
00:05:22.376 element at address: 0x20001aa00000 with size: 0.582886 MiB
00:05:22.376 element at address: 0x20000b200000 with size: 0.490723 MiB
00:05:22.376 element at address: 0x200000800000 with size: 0.487793 MiB
00:05:22.376 element at address: 0x200019400000 with size: 0.485657 MiB
00:05:22.376 element at address: 0x200027e00000 with size: 0.410034 MiB
00:05:22.376 element at address: 0x200003a00000 with size: 0.355530 MiB
00:05:22.376 list of standard malloc elements. size: 199.218079 MiB
00:05:22.376 element at address: 0x20000b3fff80 with size: 132.000122 MiB
00:05:22.376 element at address: 0x2000071fff80 with size: 64.000122 MiB
00:05:22.376 element at address: 0x200018efff80 with size: 1.000122 MiB
00:05:22.376 element at address: 0x2000190fff80 with size: 1.000122 MiB
00:05:22.376 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:05:22.376 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:05:22.376 element at address: 0x2000192eff00 with size: 0.062622 MiB
00:05:22.376 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:05:22.376 element at address: 0x2000192efdc0 with size: 0.000305 MiB
00:05:22.376 element at address: 0x2000002d7740 with size: 0.000183 MiB
00:05:22.376 element at address: 0x2000002d7800 with size: 0.000183 MiB
00:05:22.376 element at address: 0x2000002d78c0 with size: 0.000183 MiB
00:05:22.376 element at address: 0x2000002d7ac0 with size: 0.000183 MiB
00:05:22.376 element at address: 0x2000002d7b80 with size: 0.000183 MiB
00:05:22.376 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:05:22.376 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:05:22.376 element at address: 0x20000087ce00 with size: 0.000183 MiB
00:05:22.376 element at address: 0x20000087cec0 with size: 0.000183 MiB
00:05:22.376 element at address: 0x2000008fd180 with size: 0.000183 MiB
00:05:22.376 element at address: 0x200003a5b040 with size: 0.000183 MiB
00:05:22.376 element at address: 0x200003adb300 with size: 0.000183 MiB
00:05:22.376 element at address: 0x200003adb500 with size: 0.000183 MiB
00:05:22.376 element at address: 0x200003adf7c0 with size: 0.000183 MiB
00:05:22.376 element at address: 0x200003affa80 with size: 0.000183 MiB
00:05:22.376 element at address: 0x200003affb40 with size: 0.000183 MiB
00:05:22.376 element at address: 0x200003eff0c0 with size: 0.000183 MiB
00:05:22.376 element at address: 0x2000070fdd80 with size: 0.000183 MiB
00:05:22.377 element at address: 0x20000b27da00 with size: 0.000183 MiB
00:05:22.377 element at address: 0x20000b27dac0 with size: 0.000183 MiB
00:05:22.377 element at address: 0x20000b2fdd80 with size: 0.000183 MiB
00:05:22.377 element at address: 0x2000138fa8c0 with size: 0.000183 MiB
00:05:22.377 element at address: 0x2000192efc40 with size: 0.000183 MiB
00:05:22.377 element at address: 0x2000192efd00 with size: 0.000183 MiB
00:05:22.377 element at address: 0x2000194bc740 with size: 0.000183 MiB
00:05:22.377 element at address: 0x20001aa95380 with size: 0.000183 MiB
00:05:22.377 element at address: 0x20001aa95440 with size: 0.000183 MiB
00:05:22.377 element at address: 0x200027e68f80 with size: 0.000183 MiB
00:05:22.377 element at address: 0x200027e69040 with size: 0.000183 MiB
00:05:22.377 element at address: 0x200027e6fc40 with size: 0.000183 MiB
00:05:22.377 element at address: 0x200027e6fe40 with size: 0.000183 MiB
00:05:22.377 element at address: 0x200027e6ff00 with size: 0.000183 MiB
00:05:22.377 list of memzone associated elements. size: 602.262573 MiB
00:05:22.377 element at address: 0x20001aa95500 with size: 211.416748 MiB
00:05:22.377 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:22.377 element at address: 0x200027e6ffc0 with size: 157.562561 MiB
00:05:22.377 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:22.377 element at address: 0x2000139fab80 with size: 84.020630 MiB
00:05:22.377 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1422648_0
00:05:22.377 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:05:22.377 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1422648_0
00:05:22.377 element at address: 0x200003fff380 with size: 48.003052 MiB
00:05:22.377 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1422648_0
00:05:22.377 element at address: 0x2000195be940 with size: 20.255554 MiB
00:05:22.377 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:22.377 element at address: 0x200031dfeb40 with size: 18.005066 MiB
00:05:22.377 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:22.377 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:05:22.377 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1422648
00:05:22.377 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:05:22.377 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1422648
00:05:22.377 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:05:22.377 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1422648
00:05:22.377 element at address: 0x20000b2fde40 with size: 1.008118 MiB
00:05:22.377 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:22.377 element at address: 0x2000194bc800 with size: 1.008118 MiB
00:05:22.377 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:22.377 element at address: 0x2000070fde40 with size: 1.008118 MiB
00:05:22.377 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:22.377 element at address: 0x2000008fd240 with size: 1.008118 MiB
00:05:22.377 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:22.377 element at address: 0x200003eff180 with size: 1.000488 MiB
00:05:22.377 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1422648
00:05:22.377 element at address: 0x200003affc00 with size: 1.000488 MiB
00:05:22.377 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1422648
00:05:22.377 element at address: 0x2000138fa980 with size: 1.000488 MiB
00:05:22.377 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1422648
00:05:22.377 element at address: 0x200031cfe940 with size: 1.000488 MiB
00:05:22.377 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1422648
00:05:22.377 element at address: 0x200003a5b100 with size: 0.500488 MiB
00:05:22.377 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1422648
00:05:22.377 element at address: 0x20000b27db80 with size: 0.500488 MiB
00:05:22.377 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:22.377 element at address: 0x20000087cf80 with size: 0.500488 MiB
00:05:22.377 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:22.377 element at address: 0x20001947c540 with size: 0.250488 MiB
00:05:22.377 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:22.377 element at address: 0x200003adf880 with size: 0.125488 MiB
00:05:22.377 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1422648
00:05:22.377 element at address: 0x2000070f5b80 with size: 0.031738 MiB
00:05:22.377 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:22.377 element at address: 0x200027e69100 with size: 0.023743 MiB
00:05:22.377 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:22.377 element at address: 0x200003adb5c0 with size: 0.016113 MiB
00:05:22.377 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1422648
00:05:22.377 element at address: 0x200027e6f240 with size: 0.002441 MiB
00:05:22.377 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:22.377 element at address: 0x2000002d7980 with size: 0.000305 MiB
00:05:22.377 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1422648
00:05:22.377 element at address: 0x200003adb3c0 with size: 0.000305 MiB
00:05:22.377 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1422648
00:05:22.377 element at address: 0x200027e6fd00 with size: 0.000305 MiB
00:05:22.377 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:22.377 15:15:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:22.377 15:15:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1422648
00:05:22.377 15:15:39 -- common/autotest_common.sh@936 -- # '[' -z 1422648 ']'
00:05:22.377 15:15:39 -- common/autotest_common.sh@940 -- # kill -0 1422648
00:05:22.377 15:15:39 -- common/autotest_common.sh@941 -- # uname
00:05:22.377 15:15:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:22.377 15:15:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1422648
00:05:22.377 15:15:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:22.377 15:15:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:22.377 15:15:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1422648'
00:05:22.377 killing process with pid 1422648
00:05:22.377 15:15:39 -- common/autotest_common.sh@955 -- # kill 1422648
00:05:22.377 15:15:39 -- common/autotest_common.sh@960 -- # wait 1422648
00:05:22.642
00:05:22.642 real	0m1.284s
00:05:22.642 user	0m1.358s
00:05:22.642 sys	0m0.368s
00:05:22.642 15:15:39 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:22.642 15:15:39 -- common/autotest_common.sh@10 -- # set +x
************************************ 00:05:22.642 END TEST dpdk_mem_utility 00:05:22.642 ************************************ 00:05:22.642 15:15:39 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:22.642 15:15:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.642 15:15:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.642 15:15:39 -- common/autotest_common.sh@10 -- # set +x 00:05:22.642 ************************************ 00:05:22.642 START TEST event 00:05:22.642 ************************************ 00:05:22.642 15:15:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:22.904 * Looking for test storage... 00:05:22.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:22.904 15:15:40 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:22.904 15:15:40 -- bdev/nbd_common.sh@6 -- # set -e 00:05:22.904 15:15:40 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:22.904 15:15:40 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:22.904 15:15:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.904 15:15:40 -- common/autotest_common.sh@10 -- # set +x 00:05:22.904 ************************************ 00:05:22.904 START TEST event_perf 00:05:22.904 ************************************ 00:05:22.904 15:15:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:22.904 Running I/O for 1 seconds...[2024-04-26 15:15:40.294590] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:05:22.904 [2024-04-26 15:15:40.294680] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1423057 ] 00:05:22.904 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.164 [2024-04-26 15:15:40.360136] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:23.164 [2024-04-26 15:15:40.425671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.164 [2024-04-26 15:15:40.425787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.164 [2024-04-26 15:15:40.425940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:23.164 [2024-04-26 15:15:40.426142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.105 Running I/O for 1 seconds... 00:05:24.105 lcore 0: 172300 00:05:24.105 lcore 1: 172301 00:05:24.105 lcore 2: 172300 00:05:24.105 lcore 3: 172303 00:05:24.105 done. 
00:05:24.105 00:05:24.105 real 0m1.205s 00:05:24.105 user 0m4.129s 00:05:24.105 sys 0m0.076s 00:05:24.105 15:15:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:24.105 15:15:41 -- common/autotest_common.sh@10 -- # set +x 00:05:24.105 ************************************ 00:05:24.105 END TEST event_perf 00:05:24.105 ************************************ 00:05:24.105 15:15:41 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:24.105 15:15:41 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:24.105 15:15:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.105 15:15:41 -- common/autotest_common.sh@10 -- # set +x 00:05:24.365 ************************************ 00:05:24.365 START TEST event_reactor 00:05:24.365 ************************************ 00:05:24.366 15:15:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:24.366 [2024-04-26 15:15:41.688561] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:05:24.366 [2024-04-26 15:15:41.688675] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1423297 ] 00:05:24.366 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.366 [2024-04-26 15:15:41.757833] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.626 [2024-04-26 15:15:41.831437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.567 test_start 00:05:25.567 oneshot 00:05:25.567 tick 100 00:05:25.567 tick 100 00:05:25.567 tick 250 00:05:25.567 tick 100 00:05:25.567 tick 100 00:05:25.567 tick 100 00:05:25.567 tick 250 00:05:25.567 tick 500 00:05:25.567 tick 100 00:05:25.567 tick 100 00:05:25.567 tick 250 00:05:25.567 tick 100 00:05:25.567 tick 100 00:05:25.567 test_end 00:05:25.567 00:05:25.567 real 0m1.218s 00:05:25.567 user 0m1.137s 00:05:25.567 sys 0m0.077s 00:05:25.567 15:15:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:25.567 15:15:42 -- common/autotest_common.sh@10 -- # set +x 00:05:25.567 ************************************ 00:05:25.567 END TEST event_reactor 00:05:25.567 ************************************ 00:05:25.567 15:15:42 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:25.567 15:15:42 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:25.567 15:15:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.567 15:15:42 -- common/autotest_common.sh@10 -- # set +x 00:05:25.827 ************************************ 00:05:25.827 START TEST event_reactor_perf 00:05:25.827 ************************************ 00:05:25.827 15:15:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:25.827 [2024-04-26 15:15:43.103396] 
Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:05:25.827 [2024-04-26 15:15:43.103487] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1423522 ] 00:05:25.827 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.827 [2024-04-26 15:15:43.169824] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.827 [2024-04-26 15:15:43.237638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.213 test_start 00:05:27.213 test_end 00:05:27.213 Performance: 367948 events per second 00:05:27.213 00:05:27.213 real 0m1.207s 00:05:27.213 user 0m1.132s 00:05:27.213 sys 0m0.071s 00:05:27.213 15:15:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:27.213 15:15:44 -- common/autotest_common.sh@10 -- # set +x 00:05:27.213 ************************************ 00:05:27.213 END TEST event_reactor_perf 00:05:27.213 ************************************ 00:05:27.213 15:15:44 -- event/event.sh@49 -- # uname -s 00:05:27.213 15:15:44 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:27.213 15:15:44 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:27.213 15:15:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.213 15:15:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.213 15:15:44 -- common/autotest_common.sh@10 -- # set +x 00:05:27.213 ************************************ 00:05:27.213 START TEST event_scheduler 00:05:27.213 ************************************ 00:05:27.213 15:15:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:27.213 * Looking for test storage... 
00:05:27.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:27.213 15:15:44 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:27.213 15:15:44 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1423851 00:05:27.213 15:15:44 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.213 15:15:44 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:27.213 15:15:44 -- scheduler/scheduler.sh@37 -- # waitforlisten 1423851 00:05:27.213 15:15:44 -- common/autotest_common.sh@817 -- # '[' -z 1423851 ']' 00:05:27.213 15:15:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.213 15:15:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:27.213 15:15:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.213 15:15:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:27.213 15:15:44 -- common/autotest_common.sh@10 -- # set +x 00:05:27.213 [2024-04-26 15:15:44.637152] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:05:27.213 [2024-04-26 15:15:44.637212] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1423851 ] 00:05:27.474 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.474 [2024-04-26 15:15:44.693758] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:27.474 [2024-04-26 15:15:44.759504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.474 [2024-04-26 15:15:44.759661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.474 [2024-04-26 15:15:44.759815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.474 [2024-04-26 15:15:44.759816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.045 15:15:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:28.045 15:15:45 -- common/autotest_common.sh@850 -- # return 0 00:05:28.045 15:15:45 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:28.045 15:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:28.045 15:15:45 -- common/autotest_common.sh@10 -- # set +x 00:05:28.045 POWER: Env isn't set yet! 00:05:28.045 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:28.045 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:28.045 POWER: Cannot set governor of lcore 0 to userspace 00:05:28.045 POWER: Attempting to initialise PSTAT power management... 
00:05:28.045 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:28.045 POWER: Initialized successfully for lcore 0 power management 00:05:28.045 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:28.045 POWER: Initialized successfully for lcore 1 power management 00:05:28.045 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:28.045 POWER: Initialized successfully for lcore 2 power management 00:05:28.045 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:28.045 POWER: Initialized successfully for lcore 3 power management 00:05:28.045 15:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:28.046 15:15:45 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:28.046 15:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:28.046 15:15:45 -- common/autotest_common.sh@10 -- # set +x 00:05:28.306 [2024-04-26 15:15:45.517111] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:28.306 15:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:28.306 15:15:45 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:28.306 15:15:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.306 15:15:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.306 15:15:45 -- common/autotest_common.sh@10 -- # set +x 00:05:28.306 ************************************ 00:05:28.306 START TEST scheduler_create_thread 00:05:28.306 ************************************ 00:05:28.306 15:15:45 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:05:28.306 15:15:45 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:28.306 15:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:28.306 15:15:45 -- common/autotest_common.sh@10 -- # set +x 00:05:28.306 2 00:05:28.306 15:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:28.306 15:15:45 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:28.306 15:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:28.306 15:15:45 -- common/autotest_common.sh@10 -- # set +x 00:05:28.306 3 00:05:28.306 15:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:28.306 15:15:45 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:28.306 15:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:28.306 15:15:45 -- common/autotest_common.sh@10 -- # set +x 00:05:28.306 4 00:05:28.306 15:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:28.306 15:15:45 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:28.306 15:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:28.306 
15:15:45 -- common/autotest_common.sh@10 -- # set +x 00:05:28.306 5 00:05:28.306 15:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:28.306 15:15:45 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:28.306 15:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:28.306 15:15:45 -- common/autotest_common.sh@10 -- # set +x 00:05:28.306 6 00:05:28.306 15:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:28.306 15:15:45 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:28.306 15:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:28.306 15:15:45 -- common/autotest_common.sh@10 -- # set +x 00:05:28.306 7 00:05:28.306 15:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:28.306 15:15:45 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:28.306 15:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:28.306 15:15:45 -- common/autotest_common.sh@10 -- # set +x 00:05:28.566 8 00:05:28.566 15:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:28.566 15:15:45 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:28.566 15:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:28.566 15:15:45 -- common/autotest_common.sh@10 -- # set +x 00:05:29.950 9 00:05:29.950 15:15:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:29.950 15:15:46 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:29.950 15:15:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:29.950 15:15:46 -- common/autotest_common.sh@10 -- # set +x 00:05:30.892 10 00:05:30.892 15:15:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:05:30.892 15:15:48 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:30.892 15:15:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:30.892 15:15:48 -- common/autotest_common.sh@10 -- # set +x 00:05:31.833 15:15:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:31.833 15:15:49 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:31.833 15:15:49 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:31.833 15:15:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:31.833 15:15:49 -- common/autotest_common.sh@10 -- # set +x 00:05:32.402 15:15:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:32.402 15:15:49 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:32.402 15:15:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:32.402 15:15:49 -- common/autotest_common.sh@10 -- # set +x 00:05:33.341 15:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:33.341 15:15:50 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:33.341 15:15:50 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:33.341 15:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:33.341 15:15:50 -- common/autotest_common.sh@10 -- # set +x 00:05:33.911 15:15:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:33.911 00:05:33.911 real 0m5.421s 00:05:33.911 user 0m0.023s 00:05:33.911 sys 0m0.007s 00:05:33.911 15:15:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:33.911 15:15:51 -- common/autotest_common.sh@10 -- # set +x 00:05:33.911 ************************************ 00:05:33.911 END TEST scheduler_create_thread 00:05:33.911 ************************************ 00:05:33.911 15:15:51 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:33.911 15:15:51 -- 
scheduler/scheduler.sh@46 -- # killprocess 1423851 00:05:33.911 15:15:51 -- common/autotest_common.sh@936 -- # '[' -z 1423851 ']' 00:05:33.911 15:15:51 -- common/autotest_common.sh@940 -- # kill -0 1423851 00:05:33.911 15:15:51 -- common/autotest_common.sh@941 -- # uname 00:05:33.911 15:15:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:33.911 15:15:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1423851 00:05:33.911 15:15:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:33.911 15:15:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:33.911 15:15:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1423851' 00:05:33.911 killing process with pid 1423851 00:05:33.911 15:15:51 -- common/autotest_common.sh@955 -- # kill 1423851 00:05:33.911 15:15:51 -- common/autotest_common.sh@960 -- # wait 1423851 00:05:33.911 [2024-04-26 15:15:51.349470] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:34.185 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:34.185 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:34.185 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:34.185 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:34.185 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:34.185 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:34.185 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:34.185 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:34.185 00:05:34.185 real 0m7.043s 00:05:34.185 user 0m14.680s 00:05:34.185 sys 0m0.402s 00:05:34.185 15:15:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:34.185 15:15:51 -- common/autotest_common.sh@10 -- # set +x 00:05:34.185 ************************************ 00:05:34.185 END TEST event_scheduler 00:05:34.185 ************************************ 00:05:34.185 15:15:51 -- event/event.sh@51 -- # modprobe -n nbd 00:05:34.185 15:15:51 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:34.185 15:15:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.185 15:15:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.185 15:15:51 -- common/autotest_common.sh@10 -- # set +x 00:05:34.444 ************************************ 00:05:34.444 START TEST app_repeat 00:05:34.444 ************************************ 00:05:34.444 15:15:51 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:05:34.444 15:15:51 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.444 15:15:51 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.444 
15:15:51 -- event/event.sh@13 -- # local nbd_list 00:05:34.444 15:15:51 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.444 15:15:51 -- event/event.sh@14 -- # local bdev_list 00:05:34.444 15:15:51 -- event/event.sh@15 -- # local repeat_times=4 00:05:34.444 15:15:51 -- event/event.sh@17 -- # modprobe nbd 00:05:34.444 15:15:51 -- event/event.sh@19 -- # repeat_pid=1425409 00:05:34.444 15:15:51 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.444 15:15:51 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:34.444 15:15:51 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1425409' 00:05:34.444 Process app_repeat pid: 1425409 00:05:34.444 15:15:51 -- event/event.sh@23 -- # for i in {0..2} 00:05:34.444 15:15:51 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:34.444 spdk_app_start Round 0 00:05:34.444 15:15:51 -- event/event.sh@25 -- # waitforlisten 1425409 /var/tmp/spdk-nbd.sock 00:05:34.444 15:15:51 -- common/autotest_common.sh@817 -- # '[' -z 1425409 ']' 00:05:34.444 15:15:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.444 15:15:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:34.444 15:15:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:34.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:34.444 15:15:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:34.444 15:15:51 -- common/autotest_common.sh@10 -- # set +x 00:05:34.444 [2024-04-26 15:15:51.776226] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:05:34.444 [2024-04-26 15:15:51.776302] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1425409 ] 00:05:34.444 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.444 [2024-04-26 15:15:51.843311] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.705 [2024-04-26 15:15:51.917232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.705 [2024-04-26 15:15:51.917235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.274 15:15:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:35.274 15:15:52 -- common/autotest_common.sh@850 -- # return 0 00:05:35.274 15:15:52 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.274 Malloc0 00:05:35.533 15:15:52 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.533 Malloc1 00:05:35.533 15:15:52 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.533 15:15:52 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.533 15:15:52 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.533 15:15:52 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.533 15:15:52 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.533 15:15:52 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.533 15:15:52 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.533 15:15:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.533 15:15:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 
'Malloc1') 00:05:35.533 15:15:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.533 15:15:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.533 15:15:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.533 15:15:52 -- bdev/nbd_common.sh@12 -- # local i 00:05:35.533 15:15:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.533 15:15:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.533 15:15:52 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:35.793 /dev/nbd0 00:05:35.793 15:15:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:35.793 15:15:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:35.793 15:15:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:35.793 15:15:53 -- common/autotest_common.sh@855 -- # local i 00:05:35.793 15:15:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:35.793 15:15:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:35.793 15:15:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:35.793 15:15:53 -- common/autotest_common.sh@859 -- # break 00:05:35.793 15:15:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:35.793 15:15:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:35.793 15:15:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.793 1+0 records in 00:05:35.793 1+0 records out 00:05:35.793 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228528 s, 17.9 MB/s 00:05:35.793 15:15:53 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.793 15:15:53 -- common/autotest_common.sh@872 -- # size=4096 00:05:35.793 15:15:53 -- common/autotest_common.sh@873 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.793 15:15:53 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:35.793 15:15:53 -- common/autotest_common.sh@875 -- # return 0 00:05:35.793 15:15:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.793 15:15:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.793 15:15:53 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.053 /dev/nbd1 00:05:36.053 15:15:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.053 15:15:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.053 15:15:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:36.053 15:15:53 -- common/autotest_common.sh@855 -- # local i 00:05:36.053 15:15:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:36.054 15:15:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:36.054 15:15:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:36.054 15:15:53 -- common/autotest_common.sh@859 -- # break 00:05:36.054 15:15:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:36.054 15:15:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:36.054 15:15:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.054 1+0 records in 00:05:36.054 1+0 records out 00:05:36.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000142672 s, 28.7 MB/s 00:05:36.054 15:15:53 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.054 15:15:53 -- common/autotest_common.sh@872 -- # size=4096 00:05:36.054 15:15:53 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.054 15:15:53 -- common/autotest_common.sh@874 -- # '[' 
4096 '!=' 0 ']' 00:05:36.054 15:15:53 -- common/autotest_common.sh@875 -- # return 0 00:05:36.054 15:15:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.054 15:15:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.054 15:15:53 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.054 15:15:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.054 15:15:53 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.054 15:15:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.054 { 00:05:36.054 "nbd_device": "/dev/nbd0", 00:05:36.054 "bdev_name": "Malloc0" 00:05:36.054 }, 00:05:36.054 { 00:05:36.054 "nbd_device": "/dev/nbd1", 00:05:36.054 "bdev_name": "Malloc1" 00:05:36.054 } 00:05:36.054 ]' 00:05:36.054 15:15:53 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.054 { 00:05:36.054 "nbd_device": "/dev/nbd0", 00:05:36.054 "bdev_name": "Malloc0" 00:05:36.054 }, 00:05:36.054 { 00:05:36.054 "nbd_device": "/dev/nbd1", 00:05:36.054 "bdev_name": "Malloc1" 00:05:36.054 } 00:05:36.054 ]' 00:05:36.054 15:15:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.054 15:15:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.054 /dev/nbd1' 00:05:36.054 15:15:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.054 15:15:53 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.054 /dev/nbd1' 00:05:36.054 15:15:53 -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.054 15:15:53 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.054 15:15:53 -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.054 15:15:53 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.054 15:15:53 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.054 15:15:53 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.054 15:15:53 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.054 
15:15:53 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.054 15:15:53 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.054 15:15:53 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.054 15:15:53 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.318 256+0 records in 00:05:36.318 256+0 records out 00:05:36.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012461 s, 84.1 MB/s 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.318 256+0 records in 00:05:36.318 256+0 records out 00:05:36.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193797 s, 54.1 MB/s 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.318 256+0 records in 00:05:36.318 256+0 records out 00:05:36.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0187593 s, 55.9 MB/s 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.318 15:15:53 
-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@51 -- # local i 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@41 -- # break 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.318 15:15:53 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd1 00:05:36.579 15:15:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:36.579 15:15:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:36.579 15:15:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:36.579 15:15:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.579 15:15:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.579 15:15:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:36.579 15:15:53 -- bdev/nbd_common.sh@41 -- # break 00:05:36.579 15:15:53 -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.579 15:15:53 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.579 15:15:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.579 15:15:53 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.840 15:15:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:36.840 15:15:54 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:36.840 15:15:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.840 15:15:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:36.840 15:15:54 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:36.840 15:15:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.840 15:15:54 -- bdev/nbd_common.sh@65 -- # true 00:05:36.840 15:15:54 -- bdev/nbd_common.sh@65 -- # count=0 00:05:36.840 15:15:54 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:36.840 15:15:54 -- bdev/nbd_common.sh@104 -- # count=0 00:05:36.840 15:15:54 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:36.840 15:15:54 -- bdev/nbd_common.sh@109 -- # return 0 00:05:36.840 15:15:54 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.100 15:15:54 -- event/event.sh@35 -- # sleep 3 00:05:37.100 [2024-04-26 15:15:54.438803] app.c: 828:spdk_app_start: *NOTICE*: Total 
cores available: 2 00:05:37.100 [2024-04-26 15:15:54.499505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.100 [2024-04-26 15:15:54.499507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.100 [2024-04-26 15:15:54.531302] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.100 [2024-04-26 15:15:54.531338] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.399 15:15:57 -- event/event.sh@23 -- # for i in {0..2} 00:05:40.399 15:15:57 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:40.399 spdk_app_start Round 1 00:05:40.399 15:15:57 -- event/event.sh@25 -- # waitforlisten 1425409 /var/tmp/spdk-nbd.sock 00:05:40.399 15:15:57 -- common/autotest_common.sh@817 -- # '[' -z 1425409 ']' 00:05:40.399 15:15:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.399 15:15:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:40.399 15:15:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:40.399 15:15:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:40.399 15:15:57 -- common/autotest_common.sh@10 -- # set +x 00:05:40.399 15:15:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:40.399 15:15:57 -- common/autotest_common.sh@850 -- # return 0 00:05:40.399 15:15:57 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.399 Malloc0 00:05:40.399 15:15:57 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.399 Malloc1 00:05:40.399 15:15:57 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.399 15:15:57 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.399 15:15:57 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.399 15:15:57 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.399 15:15:57 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.399 15:15:57 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.399 15:15:57 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.399 15:15:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.399 15:15:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.399 15:15:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.399 15:15:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.399 15:15:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.399 15:15:57 -- bdev/nbd_common.sh@12 -- # local i 00:05:40.399 15:15:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.399 15:15:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.399 15:15:57 -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.660 /dev/nbd0 00:05:40.660 15:15:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.660 15:15:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.660 15:15:57 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:40.660 15:15:57 -- common/autotest_common.sh@855 -- # local i 00:05:40.660 15:15:57 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:40.660 15:15:57 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:40.660 15:15:57 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:40.660 15:15:57 -- common/autotest_common.sh@859 -- # break 00:05:40.660 15:15:57 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:40.660 15:15:57 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:40.660 15:15:57 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.660 1+0 records in 00:05:40.660 1+0 records out 00:05:40.660 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204223 s, 20.1 MB/s 00:05:40.660 15:15:57 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.660 15:15:57 -- common/autotest_common.sh@872 -- # size=4096 00:05:40.660 15:15:57 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.660 15:15:57 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:40.660 15:15:57 -- common/autotest_common.sh@875 -- # return 0 00:05:40.660 15:15:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.660 15:15:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.660 15:15:57 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 
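The `waitfornbd` helper traced above polls `/proc/partitions` up to 20 times for the device name, then issues a single one-block `dd ... iflag=direct` read to prove the kernel can actually service I/O on the device. A minimal, hypothetical re-creation of the polling half — generalized so the partitions file is a parameter and the sketch runs without a real NBD device attached:

```shell
# Hypothetical sketch of the waitfornbd polling loop from the trace above.
# The real helper hardcodes /proc/partitions and follows the loop with a
# one-block O_DIRECT read; here the file to poll is a parameter so the
# sketch is testable without an NBD device.
wait_for_entry() {
    file=$1; name=$2; i=1
    while [ "$i" -le 20 ]; do
        # -w: match the device name as a whole word, as the trace does
        grep -q -w "$name" "$file" && return 0
        i=$((i + 1))
        sleep 0.05
    done
    return 1    # entry never appeared within the retry budget
}
```

The bounded retry with a final direct read is the design point: a device node can exist in `/dev` before the kernel has finished attaching it, so presence in `/proc/partitions` plus one successful O_DIRECT read is the readiness signal.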
00:05:40.922 /dev/nbd1 00:05:40.922 15:15:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.922 15:15:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.922 15:15:58 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:40.922 15:15:58 -- common/autotest_common.sh@855 -- # local i 00:05:40.922 15:15:58 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:40.922 15:15:58 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:40.922 15:15:58 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:40.922 15:15:58 -- common/autotest_common.sh@859 -- # break 00:05:40.922 15:15:58 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:40.922 15:15:58 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:40.922 15:15:58 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.922 1+0 records in 00:05:40.922 1+0 records out 00:05:40.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326559 s, 12.5 MB/s 00:05:40.922 15:15:58 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.922 15:15:58 -- common/autotest_common.sh@872 -- # size=4096 00:05:40.922 15:15:58 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.922 15:15:58 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:40.922 15:15:58 -- common/autotest_common.sh@875 -- # return 0 00:05:40.922 15:15:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.922 15:15:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.922 15:15:58 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.922 15:15:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.922 15:15:58 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.922 15:15:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.922 { 00:05:40.922 "nbd_device": "/dev/nbd0", 00:05:40.922 "bdev_name": "Malloc0" 00:05:40.922 }, 00:05:40.922 { 00:05:40.922 "nbd_device": "/dev/nbd1", 00:05:40.922 "bdev_name": "Malloc1" 00:05:40.922 } 00:05:40.922 ]' 00:05:40.922 15:15:58 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.922 { 00:05:40.922 "nbd_device": "/dev/nbd0", 00:05:40.922 "bdev_name": "Malloc0" 00:05:40.922 }, 00:05:40.922 { 00:05:40.922 "nbd_device": "/dev/nbd1", 00:05:40.922 "bdev_name": "Malloc1" 00:05:40.922 } 00:05:40.922 ]' 00:05:40.922 15:15:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.922 15:15:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.922 /dev/nbd1' 00:05:40.922 15:15:58 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.922 /dev/nbd1' 00:05:40.922 15:15:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.922 15:15:58 -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.922 15:15:58 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.922 15:15:58 -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.922 15:15:58 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.922 15:15:58 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.922 15:15:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.922 15:15:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.922 15:15:58 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.922 15:15:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.922 15:15:58 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.922 15:15:58 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.183 256+0 records in 00:05:41.183 256+0 records out 00:05:41.183 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124701 s, 84.1 MB/s 00:05:41.183 15:15:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.183 15:15:58 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.183 256+0 records in 00:05:41.183 256+0 records out 00:05:41.183 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177464 s, 59.1 MB/s 00:05:41.183 15:15:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.183 15:15:58 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.183 256+0 records in 00:05:41.183 256+0 records out 00:05:41.183 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177653 s, 59.0 MB/s 00:05:41.183 15:15:58 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.183 15:15:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.183 15:15:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.183 15:15:58 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.183 15:15:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.183 15:15:58 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.183 15:15:58 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.183 15:15:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.183 15:15:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.183 15:15:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.183 15:15:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.183 15:15:58 -- bdev/nbd_common.sh@85 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.183 15:15:58 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.183 15:15:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.184 15:15:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.184 15:15:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.184 15:15:58 -- bdev/nbd_common.sh@51 -- # local i 00:05:41.184 15:15:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.184 15:15:58 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.184 15:15:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.184 15:15:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.184 15:15:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.184 15:15:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.184 15:15:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.184 15:15:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.184 15:15:58 -- bdev/nbd_common.sh@41 -- # break 00:05:41.184 15:15:58 -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.184 15:15:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.184 15:15:58 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.445 15:15:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.445 15:15:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.445 15:15:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.445 15:15:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.445 15:15:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.445 15:15:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.445 15:15:58 -- 
bdev/nbd_common.sh@41 -- # break 00:05:41.445 15:15:58 -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.445 15:15:58 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.445 15:15:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.445 15:15:58 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.706 15:15:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.706 15:15:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.706 15:15:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.706 15:15:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.706 15:15:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.706 15:15:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.706 15:15:58 -- bdev/nbd_common.sh@65 -- # true 00:05:41.706 15:15:58 -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.706 15:15:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.706 15:15:58 -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.706 15:15:58 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.706 15:15:58 -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.706 15:15:58 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.967 15:15:59 -- event/event.sh@35 -- # sleep 3 00:05:41.967 [2024-04-26 15:15:59.287069] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.967 [2024-04-26 15:15:59.347991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.967 [2024-04-26 15:15:59.348081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.967 [2024-04-26 15:15:59.380484] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
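The `nbd_get_count` sequence just traced takes the JSON array returned by the `nbd_get_disks` RPC, lists device paths with `jq -r '.[] | .nbd_device'`, and counts them with `grep -c /dev/nbd`, falling back to `true` so an empty list yields a count of 0 instead of failing the pipeline. A rough jq-free sketch of the same counting idea (the function name is invented for illustration):

```shell
# Hypothetical jq-free sketch of the nbd_get_count counting step. The real
# harness pipes jq output into "grep -c /dev/nbd"; here grep -o emits one
# line per "nbd_device" key and grep -c counts those lines. "|| true"
# mirrors the trace's fallback, since grep -c exits non-zero on a 0 count.
count_nbd_devices() {
    printf '%s\n' "$1" | grep -o '"nbd_device"' | grep -c nbd_device || true
}
```

This is why the trace shows a bare `true` after the grep when the disk list is `[]`: the count of 0 is still printed, but the pipeline's non-zero exit status must be swallowed under `set -e`-style error handling.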
00:05:41.967 [2024-04-26 15:15:59.380521] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.265 15:16:02 -- event/event.sh@23 -- # for i in {0..2} 00:05:45.265 15:16:02 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:45.265 spdk_app_start Round 2 00:05:45.265 15:16:02 -- event/event.sh@25 -- # waitforlisten 1425409 /var/tmp/spdk-nbd.sock 00:05:45.265 15:16:02 -- common/autotest_common.sh@817 -- # '[' -z 1425409 ']' 00:05:45.265 15:16:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.265 15:16:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:45.265 15:16:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:45.265 15:16:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:45.266 15:16:02 -- common/autotest_common.sh@10 -- # set +x 00:05:45.266 15:16:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:45.266 15:16:02 -- common/autotest_common.sh@850 -- # return 0 00:05:45.266 15:16:02 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.266 Malloc0 00:05:45.266 15:16:02 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.266 Malloc1 00:05:45.266 15:16:02 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.266 15:16:02 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.266 15:16:02 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.266 15:16:02 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:45.266 15:16:02 -- 
bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.266 15:16:02 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:45.266 15:16:02 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.266 15:16:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.266 15:16:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.266 15:16:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:45.266 15:16:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.266 15:16:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:45.266 15:16:02 -- bdev/nbd_common.sh@12 -- # local i 00:05:45.266 15:16:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:45.266 15:16:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.266 15:16:02 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:45.529 /dev/nbd0 00:05:45.529 15:16:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:45.529 15:16:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:45.529 15:16:02 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:45.529 15:16:02 -- common/autotest_common.sh@855 -- # local i 00:05:45.529 15:16:02 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:45.529 15:16:02 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:45.529 15:16:02 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:45.529 15:16:02 -- common/autotest_common.sh@859 -- # break 00:05:45.529 15:16:02 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:45.529 15:16:02 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:45.529 15:16:02 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.529 1+0 records in 00:05:45.529 
1+0 records out 00:05:45.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208984 s, 19.6 MB/s 00:05:45.529 15:16:02 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.529 15:16:02 -- common/autotest_common.sh@872 -- # size=4096 00:05:45.529 15:16:02 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.529 15:16:02 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:45.529 15:16:02 -- common/autotest_common.sh@875 -- # return 0 00:05:45.529 15:16:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.529 15:16:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.529 15:16:02 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.529 /dev/nbd1 00:05:45.529 15:16:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:45.529 15:16:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:45.529 15:16:02 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:45.529 15:16:02 -- common/autotest_common.sh@855 -- # local i 00:05:45.529 15:16:02 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:45.529 15:16:02 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:45.529 15:16:02 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:45.529 15:16:02 -- common/autotest_common.sh@859 -- # break 00:05:45.529 15:16:02 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:45.529 15:16:02 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:45.529 15:16:02 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.529 1+0 records in 00:05:45.529 1+0 records out 00:05:45.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295358 s, 13.9 MB/s 00:05:45.529 15:16:02 -- 
common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.529 15:16:02 -- common/autotest_common.sh@872 -- # size=4096 00:05:45.529 15:16:02 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.529 15:16:02 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:45.529 15:16:02 -- common/autotest_common.sh@875 -- # return 0 00:05:45.529 15:16:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.529 15:16:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.529 15:16:02 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.790 15:16:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.790 15:16:02 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.790 15:16:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:45.790 { 00:05:45.790 "nbd_device": "/dev/nbd0", 00:05:45.790 "bdev_name": "Malloc0" 00:05:45.790 }, 00:05:45.790 { 00:05:45.790 "nbd_device": "/dev/nbd1", 00:05:45.790 "bdev_name": "Malloc1" 00:05:45.790 } 00:05:45.790 ]' 00:05:45.790 15:16:03 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:45.790 { 00:05:45.790 "nbd_device": "/dev/nbd0", 00:05:45.790 "bdev_name": "Malloc0" 00:05:45.790 }, 00:05:45.790 { 00:05:45.790 "nbd_device": "/dev/nbd1", 00:05:45.790 "bdev_name": "Malloc1" 00:05:45.790 } 00:05:45.790 ]' 00:05:45.790 15:16:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.790 15:16:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:45.790 /dev/nbd1' 00:05:45.790 15:16:03 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:45.790 /dev/nbd1' 00:05:45.790 15:16:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.790 15:16:03 -- bdev/nbd_common.sh@65 -- # count=2 00:05:45.790 15:16:03 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:45.790 
15:16:03 -- bdev/nbd_common.sh@95 -- # count=2 00:05:45.790 15:16:03 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:45.790 15:16:03 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:45.790 15:16:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.790 15:16:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.790 15:16:03 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:45.790 15:16:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.790 15:16:03 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:45.790 15:16:03 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:45.790 256+0 records in 00:05:45.790 256+0 records out 00:05:45.790 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118462 s, 88.5 MB/s 00:05:45.790 15:16:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.790 15:16:03 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:45.790 256+0 records in 00:05:45.790 256+0 records out 00:05:45.790 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158166 s, 66.3 MB/s 00:05:45.790 15:16:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.790 15:16:03 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.049 256+0 records in 00:05:46.049 256+0 records out 00:05:46.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168892 s, 62.1 MB/s 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 
00:05:46.049 15:16:03 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@51 -- # local i 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.049 15:16:03 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@41 -- # break 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.049 15:16:03 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:46.309 15:16:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:46.309 15:16:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:46.309 15:16:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:46.309 15:16:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.309 15:16:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.309 15:16:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:46.309 15:16:03 -- bdev/nbd_common.sh@41 -- # break 00:05:46.309 15:16:03 -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.309 15:16:03 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.309 15:16:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.309 15:16:03 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.572 15:16:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:46.572 15:16:03 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.572 15:16:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.572 15:16:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.572 15:16:03 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.572 15:16:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.572 15:16:03 -- bdev/nbd_common.sh@65 -- # true 00:05:46.572 15:16:03 -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.572 15:16:03 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.572 15:16:03 -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.572 15:16:03 -- 
bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.572 15:16:03 -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.572 15:16:03 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.572 15:16:03 -- event/event.sh@35 -- # sleep 3 00:05:46.878 [2024-04-26 15:16:04.124472] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.878 [2024-04-26 15:16:04.186040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.878 [2024-04-26 15:16:04.186130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.878 [2024-04-26 15:16:04.217941] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.878 [2024-04-26 15:16:04.217977] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:50.186 15:16:06 -- event/event.sh@38 -- # waitforlisten 1425409 /var/tmp/spdk-nbd.sock 00:05:50.186 15:16:06 -- common/autotest_common.sh@817 -- # '[' -z 1425409 ']' 00:05:50.186 15:16:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.186 15:16:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:50.186 15:16:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
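The `nbd_dd_data_verify` trace above follows a simple write-then-verify pattern: fill a temp file with 1 MiB of random data, `dd` it onto each NBD device, then `cmp` each device back against the temp file and delete it. The sketch below is a hypothetical, simplified re-implementation of that pattern — regular files stand in for `/dev/nbd*` so it runs anywhere, the real test additionally passes `oflag=direct`, and the function name is illustrative, not the actual helper from `nbd_common.sh`.

```shell
#!/usr/bin/env bash
# Simplified sketch of the write/verify flow seen in nbd_dd_data_verify.
# Targets are ordinary files here; the real test writes to /dev/nbd* devices.
set -u

dd_data_verify() {
    local operation=$1; shift
    local tmp_file=$1; shift
    local targets=("$@")
    if [ "$operation" = write ]; then
        # 256 blocks of 4096 bytes = 1 MiB of random reference data
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
        for t in "${targets[@]}"; do
            dd if="$tmp_file" of="$t" bs=4096 count=256 2>/dev/null
        done
    elif [ "$operation" = verify ]; then
        # byte-wise compare of the first 1 MiB, as in the log's `cmp -b -n 1M`
        for t in "${targets[@]}"; do
            cmp -b -n 1M "$tmp_file" "$t"
        done
        rm "$tmp_file"   # the reference file is removed after a clean verify
    fi
}
```

A typical call sequence mirrors the log: `dd_data_verify write "$tmp" "$a" "$b"` followed by `dd_data_verify verify "$tmp" "$a" "$b"`, where a non-zero exit from any `cmp` fails the test.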
00:05:50.186 15:16:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:50.186 15:16:06 -- common/autotest_common.sh@10 -- # set +x 00:05:50.186 15:16:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:50.186 15:16:07 -- common/autotest_common.sh@850 -- # return 0 00:05:50.186 15:16:07 -- event/event.sh@39 -- # killprocess 1425409 00:05:50.186 15:16:07 -- common/autotest_common.sh@936 -- # '[' -z 1425409 ']' 00:05:50.186 15:16:07 -- common/autotest_common.sh@940 -- # kill -0 1425409 00:05:50.186 15:16:07 -- common/autotest_common.sh@941 -- # uname 00:05:50.186 15:16:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:50.186 15:16:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1425409 00:05:50.187 15:16:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:50.187 15:16:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:50.187 15:16:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1425409' 00:05:50.187 killing process with pid 1425409 00:05:50.187 15:16:07 -- common/autotest_common.sh@955 -- # kill 1425409 00:05:50.187 15:16:07 -- common/autotest_common.sh@960 -- # wait 1425409 00:05:50.187 spdk_app_start is called in Round 0. 00:05:50.187 Shutdown signal received, stop current app iteration 00:05:50.187 Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 reinitialization... 00:05:50.187 spdk_app_start is called in Round 1. 00:05:50.187 Shutdown signal received, stop current app iteration 00:05:50.187 Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 reinitialization... 00:05:50.187 spdk_app_start is called in Round 2. 00:05:50.187 Shutdown signal received, stop current app iteration 00:05:50.187 Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 reinitialization... 00:05:50.187 spdk_app_start is called in Round 3. 
00:05:50.187 Shutdown signal received, stop current app iteration 00:05:50.187 15:16:07 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:50.187 15:16:07 -- event/event.sh@42 -- # return 0 00:05:50.187 00:05:50.187 real 0m15.577s 00:05:50.187 user 0m33.583s 00:05:50.187 sys 0m2.107s 00:05:50.187 15:16:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:50.187 15:16:07 -- common/autotest_common.sh@10 -- # set +x 00:05:50.187 ************************************ 00:05:50.187 END TEST app_repeat 00:05:50.187 ************************************ 00:05:50.187 15:16:07 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:50.187 15:16:07 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:50.187 15:16:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.187 15:16:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.187 15:16:07 -- common/autotest_common.sh@10 -- # set +x 00:05:50.187 ************************************ 00:05:50.187 START TEST cpu_locks 00:05:50.187 ************************************ 00:05:50.187 15:16:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:50.187 * Looking for test storage... 
00:05:50.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:50.187 15:16:07 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:50.187 15:16:07 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:50.187 15:16:07 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:50.187 15:16:07 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:50.187 15:16:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.187 15:16:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.187 15:16:07 -- common/autotest_common.sh@10 -- # set +x 00:05:50.449 ************************************ 00:05:50.449 START TEST default_locks 00:05:50.449 ************************************ 00:05:50.449 15:16:07 -- common/autotest_common.sh@1111 -- # default_locks 00:05:50.449 15:16:07 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1428855 00:05:50.449 15:16:07 -- event/cpu_locks.sh@47 -- # waitforlisten 1428855 00:05:50.449 15:16:07 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.449 15:16:07 -- common/autotest_common.sh@817 -- # '[' -z 1428855 ']' 00:05:50.449 15:16:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.449 15:16:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:50.449 15:16:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.449 15:16:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:50.449 15:16:07 -- common/autotest_common.sh@10 -- # set +x 00:05:50.449 [2024-04-26 15:16:07.825170] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:05:50.449 [2024-04-26 15:16:07.825228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1428855 ] 00:05:50.449 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.449 [2024-04-26 15:16:07.891931] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.709 [2024-04-26 15:16:07.964290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.281 15:16:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:51.281 15:16:08 -- common/autotest_common.sh@850 -- # return 0 00:05:51.281 15:16:08 -- event/cpu_locks.sh@49 -- # locks_exist 1428855 00:05:51.281 15:16:08 -- event/cpu_locks.sh@22 -- # lslocks -p 1428855 00:05:51.281 15:16:08 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.852 lslocks: write error 00:05:51.852 15:16:09 -- event/cpu_locks.sh@50 -- # killprocess 1428855 00:05:51.852 15:16:09 -- common/autotest_common.sh@936 -- # '[' -z 1428855 ']' 00:05:51.852 15:16:09 -- common/autotest_common.sh@940 -- # kill -0 1428855 00:05:51.852 15:16:09 -- common/autotest_common.sh@941 -- # uname 00:05:51.852 15:16:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:51.852 15:16:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1428855 00:05:51.852 15:16:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:51.852 15:16:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:51.852 15:16:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1428855' 00:05:51.852 killing process with pid 1428855 00:05:51.852 15:16:09 -- common/autotest_common.sh@955 -- # kill 1428855 00:05:51.852 15:16:09 -- common/autotest_common.sh@960 -- # wait 1428855 00:05:52.113 15:16:09 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1428855 00:05:52.113 15:16:09 -- 
common/autotest_common.sh@638 -- # local es=0 00:05:52.113 15:16:09 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1428855 00:05:52.113 15:16:09 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:52.113 15:16:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:52.113 15:16:09 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:52.113 15:16:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:52.113 15:16:09 -- common/autotest_common.sh@641 -- # waitforlisten 1428855 00:05:52.113 15:16:09 -- common/autotest_common.sh@817 -- # '[' -z 1428855 ']' 00:05:52.113 15:16:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.113 15:16:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:52.113 15:16:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:52.113 15:16:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:52.113 15:16:09 -- common/autotest_common.sh@10 -- # set +x 00:05:52.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1428855) - No such process 00:05:52.113 ERROR: process (pid: 1428855) is no longer running 00:05:52.113 15:16:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:52.113 15:16:09 -- common/autotest_common.sh@850 -- # return 1 00:05:52.113 15:16:09 -- common/autotest_common.sh@641 -- # es=1 00:05:52.113 15:16:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:52.113 15:16:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:52.113 15:16:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:52.113 15:16:09 -- event/cpu_locks.sh@54 -- # no_locks 00:05:52.113 15:16:09 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:52.113 15:16:09 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:52.113 15:16:09 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:52.113 00:05:52.113 real 0m1.657s 00:05:52.113 user 0m1.752s 00:05:52.113 sys 0m0.557s 00:05:52.113 15:16:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:52.113 15:16:09 -- common/autotest_common.sh@10 -- # set +x 00:05:52.113 ************************************ 00:05:52.113 END TEST default_locks 00:05:52.113 ************************************ 00:05:52.113 15:16:09 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:52.113 15:16:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:52.113 15:16:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.113 15:16:09 -- common/autotest_common.sh@10 -- # set +x 00:05:52.374 ************************************ 00:05:52.374 START TEST default_locks_via_rpc 00:05:52.374 ************************************ 00:05:52.374 15:16:09 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:05:52.374 15:16:09 -- 
event/cpu_locks.sh@62 -- # spdk_tgt_pid=1429233 00:05:52.374 15:16:09 -- event/cpu_locks.sh@63 -- # waitforlisten 1429233 00:05:52.374 15:16:09 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.374 15:16:09 -- common/autotest_common.sh@817 -- # '[' -z 1429233 ']' 00:05:52.374 15:16:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.374 15:16:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:52.374 15:16:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.374 15:16:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:52.374 15:16:09 -- common/autotest_common.sh@10 -- # set +x 00:05:52.374 [2024-04-26 15:16:09.649386] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:05:52.374 [2024-04-26 15:16:09.649429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1429233 ] 00:05:52.374 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.374 [2024-04-26 15:16:09.709118] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.374 [2024-04-26 15:16:09.770598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.317 15:16:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:53.317 15:16:10 -- common/autotest_common.sh@850 -- # return 0 00:05:53.317 15:16:10 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:53.317 15:16:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:53.317 15:16:10 -- common/autotest_common.sh@10 -- # set +x 00:05:53.317 15:16:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:53.317 15:16:10 -- event/cpu_locks.sh@67 -- # no_locks 00:05:53.317 15:16:10 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:53.317 15:16:10 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:53.317 15:16:10 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:53.317 15:16:10 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:53.317 15:16:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:53.317 15:16:10 -- common/autotest_common.sh@10 -- # set +x 00:05:53.317 15:16:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:53.317 15:16:10 -- event/cpu_locks.sh@71 -- # locks_exist 1429233 00:05:53.317 15:16:10 -- event/cpu_locks.sh@22 -- # lslocks -p 1429233 00:05:53.317 15:16:10 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.889 15:16:11 -- event/cpu_locks.sh@73 -- # killprocess 1429233 00:05:53.889 15:16:11 -- common/autotest_common.sh@936 -- # '[' -z 1429233 ']' 00:05:53.889 15:16:11 -- common/autotest_common.sh@940 -- # kill -0 
1429233 00:05:53.889 15:16:11 -- common/autotest_common.sh@941 -- # uname 00:05:53.889 15:16:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:53.889 15:16:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1429233 00:05:53.889 15:16:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:53.889 15:16:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:53.889 15:16:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1429233' 00:05:53.889 killing process with pid 1429233 00:05:53.889 15:16:11 -- common/autotest_common.sh@955 -- # kill 1429233 00:05:53.889 15:16:11 -- common/autotest_common.sh@960 -- # wait 1429233 00:05:54.150 00:05:54.150 real 0m1.750s 00:05:54.150 user 0m1.869s 00:05:54.150 sys 0m0.552s 00:05:54.150 15:16:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:54.150 15:16:11 -- common/autotest_common.sh@10 -- # set +x 00:05:54.150 ************************************ 00:05:54.150 END TEST default_locks_via_rpc 00:05:54.150 ************************************ 00:05:54.150 15:16:11 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:54.150 15:16:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.150 15:16:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.150 15:16:11 -- common/autotest_common.sh@10 -- # set +x 00:05:54.150 ************************************ 00:05:54.150 START TEST non_locking_app_on_locked_coremask 00:05:54.150 ************************************ 00:05:54.150 15:16:11 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:05:54.150 15:16:11 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.150 15:16:11 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1429605 00:05:54.150 15:16:11 -- event/cpu_locks.sh@81 -- # waitforlisten 1429605 /var/tmp/spdk.sock 
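The `killprocess` sequence traced above checks liveness with `kill -0`, inspects the command name with `ps --no-headers -o comm=`, refuses to kill `sudo`, then sends SIGTERM and waits for the pid to exit. A minimal sketch of that pattern, assuming illustrative names (this is not the real `autotest_common.sh` helper, which also handles uname-specific branches):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the killprocess pattern from the log:
# verify the pid is alive and sane before killing, then wait for exit.
set -u

killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 1   # is the process still alive?
    name=$(ps --no-headers -o comm= "$pid")  # command name, as in the log
    [ "$name" = sudo ] && return 1           # never SIGTERM a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    # `wait` only covers our own children, so poll until the pid is gone
    while kill -0 "$pid" 2>/dev/null; do sleep 0.1; done
}
```

The `kill -0` probe sends no signal; it only reports (via exit status) whether the pid exists and is signalable, which is why the log uses it both before and as the "is it gone yet" test.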
00:05:54.150 15:16:11 -- common/autotest_common.sh@817 -- # '[' -z 1429605 ']' 00:05:54.150 15:16:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.150 15:16:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:54.150 15:16:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.150 15:16:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:54.150 15:16:11 -- common/autotest_common.sh@10 -- # set +x 00:05:54.150 [2024-04-26 15:16:11.521950] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:05:54.150 [2024-04-26 15:16:11.521985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1429605 ] 00:05:54.151 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.151 [2024-04-26 15:16:11.575443] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.411 [2024-04-26 15:16:11.637424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.992 15:16:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:54.993 15:16:12 -- common/autotest_common.sh@850 -- # return 0 00:05:54.993 15:16:12 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:54.993 15:16:12 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1429931 00:05:54.993 15:16:12 -- event/cpu_locks.sh@85 -- # waitforlisten 1429931 /var/tmp/spdk2.sock 00:05:54.993 15:16:12 -- common/autotest_common.sh@817 -- # '[' -z 1429931 ']' 00:05:54.993 15:16:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.993 15:16:12 
-- common/autotest_common.sh@822 -- # local max_retries=100 00:05:54.993 15:16:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.993 15:16:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:54.993 15:16:12 -- common/autotest_common.sh@10 -- # set +x 00:05:54.993 [2024-04-26 15:16:12.324580] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:05:54.993 [2024-04-26 15:16:12.324620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1429931 ] 00:05:54.993 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.993 [2024-04-26 15:16:12.406578] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:54.993 [2024-04-26 15:16:12.406608] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.262 [2024-04-26 15:16:12.535650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.835 15:16:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:55.835 15:16:13 -- common/autotest_common.sh@850 -- # return 0 00:05:55.835 15:16:13 -- event/cpu_locks.sh@87 -- # locks_exist 1429605 00:05:55.835 15:16:13 -- event/cpu_locks.sh@22 -- # lslocks -p 1429605 00:05:55.835 15:16:13 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.406 lslocks: write error 00:05:56.406 15:16:13 -- event/cpu_locks.sh@89 -- # killprocess 1429605 00:05:56.406 15:16:13 -- common/autotest_common.sh@936 -- # '[' -z 1429605 ']' 00:05:56.406 15:16:13 -- common/autotest_common.sh@940 -- # kill -0 1429605 00:05:56.406 15:16:13 -- common/autotest_common.sh@941 -- # uname 00:05:56.406 15:16:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:56.406 15:16:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1429605 00:05:56.406 15:16:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:56.406 15:16:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:56.406 15:16:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1429605' 00:05:56.406 killing process with pid 1429605 00:05:56.406 15:16:13 -- common/autotest_common.sh@955 -- # kill 1429605 00:05:56.406 15:16:13 -- common/autotest_common.sh@960 -- # wait 1429605 00:05:56.666 15:16:14 -- event/cpu_locks.sh@90 -- # killprocess 1429931 00:05:56.666 15:16:14 -- common/autotest_common.sh@936 -- # '[' -z 1429931 ']' 00:05:56.666 15:16:14 -- common/autotest_common.sh@940 -- # kill -0 1429931 00:05:56.666 15:16:14 -- common/autotest_common.sh@941 -- # uname 00:05:56.666 15:16:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:56.666 15:16:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1429931 
00:05:56.927 15:16:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:56.927 15:16:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:56.927 15:16:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1429931' 00:05:56.927 killing process with pid 1429931 00:05:56.927 15:16:14 -- common/autotest_common.sh@955 -- # kill 1429931 00:05:56.927 15:16:14 -- common/autotest_common.sh@960 -- # wait 1429931 00:05:56.927 00:05:56.927 real 0m2.856s 00:05:56.927 user 0m3.107s 00:05:56.927 sys 0m0.800s 00:05:56.927 15:16:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:56.927 15:16:14 -- common/autotest_common.sh@10 -- # set +x 00:05:56.927 ************************************ 00:05:56.927 END TEST non_locking_app_on_locked_coremask 00:05:56.927 ************************************ 00:05:57.187 15:16:14 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:57.187 15:16:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.187 15:16:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.187 15:16:14 -- common/autotest_common.sh@10 -- # set +x 00:05:57.187 ************************************ 00:05:57.187 START TEST locking_app_on_unlocked_coremask 00:05:57.187 ************************************ 00:05:57.187 15:16:14 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:05:57.187 15:16:14 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1430319 00:05:57.187 15:16:14 -- event/cpu_locks.sh@99 -- # waitforlisten 1430319 /var/tmp/spdk.sock 00:05:57.187 15:16:14 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:57.187 15:16:14 -- common/autotest_common.sh@817 -- # '[' -z 1430319 ']' 00:05:57.187 15:16:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.187 15:16:14 -- common/autotest_common.sh@822 -- 
# local max_retries=100 00:05:57.187 15:16:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.187 15:16:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:57.187 15:16:14 -- common/autotest_common.sh@10 -- # set +x 00:05:57.187 [2024-04-26 15:16:14.602232] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:05:57.188 [2024-04-26 15:16:14.602292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1430319 ] 00:05:57.188 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.449 [2024-04-26 15:16:14.668686] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:57.449 [2024-04-26 15:16:14.668720] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.449 [2024-04-26 15:16:14.740795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.020 15:16:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:58.020 15:16:15 -- common/autotest_common.sh@850 -- # return 0 00:05:58.020 15:16:15 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1430530 00:05:58.020 15:16:15 -- event/cpu_locks.sh@103 -- # waitforlisten 1430530 /var/tmp/spdk2.sock 00:05:58.020 15:16:15 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:58.020 15:16:15 -- common/autotest_common.sh@817 -- # '[' -z 1430530 ']' 00:05:58.020 15:16:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.020 15:16:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:58.020 15:16:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.020 15:16:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:58.020 15:16:15 -- common/autotest_common.sh@10 -- # set +x 00:05:58.020 [2024-04-26 15:16:15.412520] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:05:58.020 [2024-04-26 15:16:15.412575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1430530 ] 00:05:58.020 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.281 [2024-04-26 15:16:15.499446] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.281 [2024-04-26 15:16:15.627421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.852 15:16:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:58.852 15:16:16 -- common/autotest_common.sh@850 -- # return 0 00:05:58.852 15:16:16 -- event/cpu_locks.sh@105 -- # locks_exist 1430530 00:05:58.852 15:16:16 -- event/cpu_locks.sh@22 -- # lslocks -p 1430530 00:05:58.852 15:16:16 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.424 lslocks: write error 00:05:59.424 15:16:16 -- event/cpu_locks.sh@107 -- # killprocess 1430319 00:05:59.424 15:16:16 -- common/autotest_common.sh@936 -- # '[' -z 1430319 ']' 00:05:59.424 15:16:16 -- common/autotest_common.sh@940 -- # kill -0 1430319 00:05:59.424 15:16:16 -- common/autotest_common.sh@941 -- # uname 00:05:59.424 15:16:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:59.424 15:16:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1430319 00:05:59.424 15:16:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:59.424 15:16:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = 
sudo ']' 00:05:59.424 15:16:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1430319' 00:05:59.424 killing process with pid 1430319 00:05:59.424 15:16:16 -- common/autotest_common.sh@955 -- # kill 1430319 00:05:59.424 15:16:16 -- common/autotest_common.sh@960 -- # wait 1430319 00:05:59.997 15:16:17 -- event/cpu_locks.sh@108 -- # killprocess 1430530 00:05:59.997 15:16:17 -- common/autotest_common.sh@936 -- # '[' -z 1430530 ']' 00:05:59.997 15:16:17 -- common/autotest_common.sh@940 -- # kill -0 1430530 00:05:59.997 15:16:17 -- common/autotest_common.sh@941 -- # uname 00:05:59.997 15:16:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:59.997 15:16:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1430530 00:05:59.997 15:16:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:59.997 15:16:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:59.997 15:16:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1430530' 00:05:59.997 killing process with pid 1430530 00:05:59.997 15:16:17 -- common/autotest_common.sh@955 -- # kill 1430530 00:05:59.997 15:16:17 -- common/autotest_common.sh@960 -- # wait 1430530 00:05:59.997 00:05:59.997 real 0m2.857s 00:05:59.997 user 0m3.108s 00:05:59.997 sys 0m0.853s 00:05:59.997 15:16:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.997 15:16:17 -- common/autotest_common.sh@10 -- # set +x 00:05:59.997 ************************************ 00:05:59.997 END TEST locking_app_on_unlocked_coremask 00:05:59.997 ************************************ 00:05:59.997 15:16:17 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:59.997 15:16:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.997 15:16:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.997 15:16:17 -- common/autotest_common.sh@10 -- # set +x 00:06:00.259 
************************************ 00:06:00.259 START TEST locking_app_on_locked_coremask 00:06:00.259 ************************************ 00:06:00.259 15:16:17 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:06:00.259 15:16:17 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1431034 00:06:00.259 15:16:17 -- event/cpu_locks.sh@116 -- # waitforlisten 1431034 /var/tmp/spdk.sock 00:06:00.259 15:16:17 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.259 15:16:17 -- common/autotest_common.sh@817 -- # '[' -z 1431034 ']' 00:06:00.259 15:16:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.259 15:16:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:00.259 15:16:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.259 15:16:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:00.259 15:16:17 -- common/autotest_common.sh@10 -- # set +x 00:06:00.259 [2024-04-26 15:16:17.650158] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:06:00.259 [2024-04-26 15:16:17.650216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431034 ]
00:06:00.259 EAL: No free 2048 kB hugepages reported on node 1
00:06:00.520 [2024-04-26 15:16:17.713906] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:00.520 [2024-04-26 15:16:17.786394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:01.093 15:16:18 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:06:01.093 15:16:18 -- common/autotest_common.sh@850 -- # return 0
00:06:01.093 15:16:18 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1431069
00:06:01.093 15:16:18 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1431069 /var/tmp/spdk2.sock
00:06:01.093 15:16:18 -- common/autotest_common.sh@638 -- # local es=0
00:06:01.093 15:16:18 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:01.093 15:16:18 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1431069 /var/tmp/spdk2.sock
00:06:01.093 15:16:18 -- common/autotest_common.sh@626 -- # local arg=waitforlisten
00:06:01.093 15:16:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:06:01.093 15:16:18 -- common/autotest_common.sh@630 -- # type -t waitforlisten
00:06:01.093 15:16:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:06:01.093 15:16:18 -- common/autotest_common.sh@641 -- # waitforlisten 1431069 /var/tmp/spdk2.sock
00:06:01.093 15:16:18 -- common/autotest_common.sh@817 -- # '[' -z 1431069 ']'
00:06:01.093 15:16:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:01.093 15:16:18 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:01.093 15:16:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:01.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:01.093 15:16:18 -- common/autotest_common.sh@826 -- # xtrace_disable
00:06:01.093 15:16:18 -- common/autotest_common.sh@10 -- # set +x
00:06:01.093 [2024-04-26 15:16:18.463040] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:06:01.093 [2024-04-26 15:16:18.463090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431069 ]
00:06:01.093 EAL: No free 2048 kB hugepages reported on node 1
00:06:01.354 [2024-04-26 15:16:18.551490] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1431034 has claimed it.
00:06:01.354 [2024-04-26 15:16:18.551530] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:01.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1431069) - No such process
00:06:01.616 ERROR: process (pid: 1431069) is no longer running
00:06:01.877 15:16:19 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:06:01.877 15:16:19 -- common/autotest_common.sh@850 -- # return 1
00:06:01.877 15:16:19 -- common/autotest_common.sh@641 -- # es=1
00:06:01.877 15:16:19 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:06:01.877 15:16:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:06:01.877 15:16:19 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:06:01.877 15:16:19 -- event/cpu_locks.sh@122 -- # locks_exist 1431034
00:06:01.877 15:16:19 -- event/cpu_locks.sh@22 -- # lslocks -p 1431034
00:06:01.877 15:16:19 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:02.138 lslocks: write error
00:06:02.138 15:16:19 -- event/cpu_locks.sh@124 -- # killprocess 1431034
00:06:02.138 15:16:19 -- common/autotest_common.sh@936 -- # '[' -z 1431034 ']'
00:06:02.138 15:16:19 -- common/autotest_common.sh@940 -- # kill -0 1431034
00:06:02.138 15:16:19 -- common/autotest_common.sh@941 -- # uname
00:06:02.138 15:16:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:02.138 15:16:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1431034
00:06:02.138 15:16:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:02.138 15:16:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:02.138 15:16:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1431034'
00:06:02.138 killing process with pid 1431034
00:06:02.138 15:16:19 -- common/autotest_common.sh@955 -- # kill 1431034
00:06:02.138 15:16:19 -- common/autotest_common.sh@960 -- # wait 1431034
00:06:02.399
00:06:02.399 real 0m2.203s
00:06:02.399 user 0m2.423s
00:06:02.399 sys 0m0.617s
00:06:02.399 15:16:19 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:02.399 15:16:19 -- common/autotest_common.sh@10 -- # set +x
00:06:02.399 ************************************
00:06:02.399 END TEST locking_app_on_locked_coremask
00:06:02.399 ************************************
00:06:02.399 15:16:19 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:02.399 15:16:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:02.399 15:16:19 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:02.399 15:16:19 -- common/autotest_common.sh@10 -- # set +x
00:06:02.660 ************************************
00:06:02.660 START TEST locking_overlapped_coremask ************************************
00:06:02.660 15:16:19 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask
00:06:02.660 15:16:19 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1431418
00:06:02.660 15:16:19 -- event/cpu_locks.sh@133 -- # waitforlisten 1431418 /var/tmp/spdk.sock
00:06:02.660 15:16:19 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:06:02.660 15:16:19 -- common/autotest_common.sh@817 -- # '[' -z 1431418 ']'
00:06:02.660 15:16:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:02.660 15:16:19 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:02.660 15:16:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:02.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:02.660 15:16:19 -- common/autotest_common.sh@826 -- # xtrace_disable
00:06:02.660 15:16:19 -- common/autotest_common.sh@10 -- # set +x
00:06:02.660 [2024-04-26 15:16:20.045212] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:06:02.660 [2024-04-26 15:16:20.045274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431418 ]
00:06:02.660 EAL: No free 2048 kB hugepages reported on node 1
00:06:02.921 [2024-04-26 15:16:20.111284] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:02.922 [2024-04-26 15:16:20.188242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:02.922 [2024-04-26 15:16:20.188360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:02.922 [2024-04-26 15:16:20.188363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:03.493 15:16:20 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:06:03.493 15:16:20 -- common/autotest_common.sh@850 -- # return 0
00:06:03.493 15:16:20 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1431751
00:06:03.493 15:16:20 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1431751 /var/tmp/spdk2.sock
00:06:03.494 15:16:20 -- common/autotest_common.sh@638 -- # local es=0
00:06:03.494 15:16:20 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:03.494 15:16:20 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1431751 /var/tmp/spdk2.sock
00:06:03.494 15:16:20 -- common/autotest_common.sh@626 -- # local arg=waitforlisten
00:06:03.494 15:16:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:06:03.494 15:16:20 -- common/autotest_common.sh@630 -- # type -t waitforlisten
00:06:03.494 15:16:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:06:03.494 15:16:20 -- common/autotest_common.sh@641 -- # waitforlisten 1431751 /var/tmp/spdk2.sock
00:06:03.494 15:16:20 -- common/autotest_common.sh@817 -- # '[' -z 1431751 ']'
00:06:03.494 15:16:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:03.494 15:16:20 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:03.494 15:16:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:03.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:03.494 15:16:20 -- common/autotest_common.sh@826 -- # xtrace_disable
00:06:03.494 15:16:20 -- common/autotest_common.sh@10 -- # set +x
00:06:03.494 [2024-04-26 15:16:20.878384] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:06:03.494 [2024-04-26 15:16:20.878435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431751 ]
00:06:03.494 EAL: No free 2048 kB hugepages reported on node 1
00:06:03.755 [2024-04-26 15:16:20.948930] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1431418 has claimed it.
00:06:03.755 [2024-04-26 15:16:20.948960] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:04.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1431751) - No such process
00:06:04.327 ERROR: process (pid: 1431751) is no longer running
00:06:04.327 15:16:21 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:06:04.327 15:16:21 -- common/autotest_common.sh@850 -- # return 1
00:06:04.327 15:16:21 -- common/autotest_common.sh@641 -- # es=1
00:06:04.327 15:16:21 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:06:04.327 15:16:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:06:04.327 15:16:21 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:06:04.327 15:16:21 -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:06:04.327 15:16:21 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:04.327 15:16:21 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:04.327 15:16:21 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:04.327 15:16:21 -- event/cpu_locks.sh@141 -- # killprocess 1431418
00:06:04.327 15:16:21 -- common/autotest_common.sh@936 -- # '[' -z 1431418 ']'
00:06:04.327 15:16:21 -- common/autotest_common.sh@940 -- # kill -0 1431418
00:06:04.327 15:16:21 -- common/autotest_common.sh@941 -- # uname
00:06:04.327 15:16:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:04.327 15:16:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1431418
00:06:04.327 15:16:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:04.327 15:16:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:04.327 15:16:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1431418'
killing process with pid 1431418
15:16:21 -- common/autotest_common.sh@955 -- # kill 1431418
15:16:21 -- common/autotest_common.sh@960 -- # wait 1431418
00:06:04.327
00:06:04.327 real 0m1.760s
00:06:04.327 user 0m4.969s
00:06:04.327 sys 0m0.373s
00:06:04.327 15:16:21 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:04.327 15:16:21 -- common/autotest_common.sh@10 -- # set +x
00:06:04.328 ************************************
00:06:04.328 END TEST locking_overlapped_coremask
00:06:04.328 ************************************
00:06:04.589 15:16:21 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:06:04.589 15:16:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:04.589 15:16:21 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:04.589 15:16:21 -- common/autotest_common.sh@10 -- # set +x
00:06:04.589 ************************************
00:06:04.589 START TEST locking_overlapped_coremask_via_rpc ************************************
00:06:04.589 15:16:21 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc
00:06:04.589 15:16:21 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1431893
00:06:04.589 15:16:21 -- event/cpu_locks.sh@149 -- # waitforlisten 1431893 /var/tmp/spdk.sock
00:06:04.589 15:16:21 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:06:04.589 15:16:21 -- common/autotest_common.sh@817 -- # '[' -z 1431893 ']'
00:06:04.589 15:16:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:04.589 15:16:21 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:04.589 15:16:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:04.589 15:16:21 -- common/autotest_common.sh@826 -- # xtrace_disable
00:06:04.589 15:16:21 -- common/autotest_common.sh@10 -- # set +x
00:06:04.589 [2024-04-26 15:16:22.003969] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:06:04.589 [2024-04-26 15:16:22.004016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431893 ]
00:06:04.589 EAL: No free 2048 kB hugepages reported on node 1
00:06:04.850 [2024-04-26 15:16:22.063946] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:04.850 [2024-04-26 15:16:22.063973] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:04.850 [2024-04-26 15:16:22.128781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:04.850 [2024-04-26 15:16:22.128911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:04.850 [2024-04-26 15:16:22.129080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:05.422 15:16:22 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:06:05.422 15:16:22 -- common/autotest_common.sh@850 -- # return 0
00:06:05.422 15:16:22 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1432131
00:06:05.422 15:16:22 -- event/cpu_locks.sh@153 -- # waitforlisten 1432131 /var/tmp/spdk2.sock
00:06:05.422 15:16:22 -- common/autotest_common.sh@817 -- # '[' -z 1432131 ']'
00:06:05.422 15:16:22 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:06:05.422 15:16:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:05.422 15:16:22 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:05.422 15:16:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:05.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:05.422 15:16:22 -- common/autotest_common.sh@826 -- # xtrace_disable
00:06:05.422 15:16:22 -- common/autotest_common.sh@10 -- # set +x
00:06:05.422 [2024-04-26 15:16:22.812492] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:06:05.422 [2024-04-26 15:16:22.812542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1432131 ]
00:06:05.422 EAL: No free 2048 kB hugepages reported on node 1
00:06:05.683 [2024-04-26 15:16:22.885608] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:05.683 [2024-04-26 15:16:22.885629] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:05.683 [2024-04-26 15:16:22.988918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:05.683 [2024-04-26 15:16:22.988960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:05.683 [2024-04-26 15:16:22.988962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:06:06.274 15:16:23 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:06:06.274 15:16:23 -- common/autotest_common.sh@850 -- # return 0
00:06:06.274 15:16:23 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:06:06.274 15:16:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:06.274 15:16:23 -- common/autotest_common.sh@10 -- # set +x
00:06:06.274 15:16:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:06.274 15:16:23 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:06.274 15:16:23 -- common/autotest_common.sh@638 -- # local es=0
00:06:06.274 15:16:23 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:06.275 15:16:23 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd
00:06:06.275 15:16:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:06:06.275 15:16:23 -- common/autotest_common.sh@630 -- # type -t rpc_cmd
00:06:06.275 15:16:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:06:06.275 15:16:23 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:06.275 15:16:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:06.275 15:16:23 -- common/autotest_common.sh@10 -- # set +x
00:06:06.275 [2024-04-26 15:16:23.584900] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1431893 has claimed it.
00:06:06.275 request:
00:06:06.275 {
00:06:06.275 "method": "framework_enable_cpumask_locks",
00:06:06.275 "req_id": 1
00:06:06.275 }
00:06:06.275 Got JSON-RPC error response
00:06:06.275 response:
00:06:06.275 {
00:06:06.275 "code": -32603,
00:06:06.275 "message": "Failed to claim CPU core: 2"
00:06:06.275 }
00:06:06.275 15:16:23 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]]
00:06:06.275 15:16:23 -- common/autotest_common.sh@641 -- # es=1
00:06:06.275 15:16:23 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:06:06.275 15:16:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:06:06.275 15:16:23 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:06:06.275 15:16:23 -- event/cpu_locks.sh@158 -- # waitforlisten 1431893 /var/tmp/spdk.sock
00:06:06.275 15:16:23 -- common/autotest_common.sh@817 -- # '[' -z 1431893 ']'
00:06:06.275 15:16:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:06.275 15:16:23 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:06.275 15:16:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:06.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:06.275 15:16:23 -- common/autotest_common.sh@826 -- # xtrace_disable
00:06:06.275 15:16:23 -- common/autotest_common.sh@10 -- # set +x
00:06:06.536 15:16:23 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:06:06.536 15:16:23 -- common/autotest_common.sh@850 -- # return 0
00:06:06.536 15:16:23 -- event/cpu_locks.sh@159 -- # waitforlisten 1432131 /var/tmp/spdk2.sock
00:06:06.536 15:16:23 -- common/autotest_common.sh@817 -- # '[' -z 1432131 ']'
00:06:06.536 15:16:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:06.536 15:16:23 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:06.536 15:16:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:06.536 15:16:23 -- common/autotest_common.sh@826 -- # xtrace_disable
00:06:06.536 15:16:23 -- common/autotest_common.sh@10 -- # set +x
00:06:06.536 15:16:23 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:06:06.536 15:16:23 -- common/autotest_common.sh@850 -- # return 0
00:06:06.536 15:16:23 -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:06:06.536 15:16:23 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:06.536 15:16:23 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:06.536 15:16:23 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:06.536
00:06:06.536 real 0m1.989s
00:06:06.536 user 0m0.765s
00:06:06.536 sys 0m0.159s
00:06:06.536 15:16:23 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:06.536 15:16:23 -- common/autotest_common.sh@10 -- # set +x
00:06:06.536 ************************************
00:06:06.536 END TEST locking_overlapped_coremask_via_rpc
00:06:06.536 ************************************
00:06:06.536 15:16:23 -- event/cpu_locks.sh@174 -- # cleanup
00:06:06.536 15:16:23 -- event/cpu_locks.sh@15 -- # [[ -z 1431893 ]]
00:06:06.536 15:16:23 -- event/cpu_locks.sh@15 -- # killprocess 1431893
00:06:06.536 15:16:23 -- common/autotest_common.sh@936 -- # '[' -z 1431893 ']'
00:06:06.536 15:16:23 -- common/autotest_common.sh@940 -- # kill -0 1431893
00:06:06.536 15:16:23 -- common/autotest_common.sh@941 -- # uname
00:06:06.536 15:16:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:06.536 15:16:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1431893
00:06:06.797 15:16:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:06.797 15:16:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:06.797 15:16:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1431893'
killing process with pid 1431893
15:16:24 -- common/autotest_common.sh@955 -- # kill 1431893
15:16:24 -- common/autotest_common.sh@960 -- # wait 1431893
00:06:06.797 15:16:24 -- event/cpu_locks.sh@16 -- # [[ -z 1432131 ]]
00:06:06.797 15:16:24 -- event/cpu_locks.sh@16 -- # killprocess 1432131
00:06:06.797 15:16:24 -- common/autotest_common.sh@936 -- # '[' -z 1432131 ']'
00:06:06.797 15:16:24 -- common/autotest_common.sh@940 -- # kill -0 1432131
00:06:07.058 15:16:24 -- common/autotest_common.sh@941 -- # uname
00:06:07.058 15:16:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:07.058 15:16:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1432131
00:06:07.058 15:16:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:06:07.058 15:16:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:06:07.058 15:16:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1432131'
killing process with pid 1432131
15:16:24 -- common/autotest_common.sh@955 -- # kill 1432131
15:16:24 -- common/autotest_common.sh@960 -- # wait 1432131
00:06:07.058 15:16:24 -- event/cpu_locks.sh@18 -- # rm -f
00:06:07.058 15:16:24 -- event/cpu_locks.sh@1 -- # cleanup
00:06:07.058 15:16:24 -- event/cpu_locks.sh@15 -- # [[ -z 1431893 ]]
00:06:07.058 15:16:24 -- event/cpu_locks.sh@15 -- # killprocess 1431893
00:06:07.058 15:16:24 -- common/autotest_common.sh@936 -- # '[' -z 1431893 ']'
00:06:07.058 15:16:24 -- common/autotest_common.sh@940 -- # kill -0 1431893
00:06:07.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1431893) - No such process
00:06:07.058 15:16:24 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1431893 is not found'
Process with pid 1431893 is not found
00:06:07.058 15:16:24 -- event/cpu_locks.sh@16 -- # [[ -z 1432131 ]]
00:06:07.058 15:16:24 -- event/cpu_locks.sh@16 -- # killprocess 1432131
00:06:07.058 15:16:24 -- common/autotest_common.sh@936 -- # '[' -z 1432131 ']'
00:06:07.058 15:16:24 -- common/autotest_common.sh@940 -- # kill -0 1432131
00:06:07.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1432131) - No such process
00:06:07.058 15:16:24 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1432131 is not found'
Process with pid 1432131 is not found
00:06:07.058 15:16:24 -- event/cpu_locks.sh@18 -- # rm -f
00:06:07.058
00:06:07.058 real 0m16.987s
00:06:07.058 user 0m27.824s
00:06:07.058 sys 0m5.119s
00:06:07.058 15:16:24 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:07.058 15:16:24 -- common/autotest_common.sh@10 -- # set +x
00:06:07.058 ************************************
00:06:07.058 END TEST cpu_locks ************************************
00:06:07.319
00:06:07.319 real 0m44.505s
00:06:07.319 user 1m22.957s
00:06:07.319 sys 0m8.560s
00:06:07.319 15:16:24 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:07.319 15:16:24 -- common/autotest_common.sh@10 -- # set +x
00:06:07.319 ************************************
00:06:07.319 END TEST event
00:06:07.319 ************************************
00:06:07.319 15:16:24 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:06:07.319 15:16:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:07.319 15:16:24 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:07.319 15:16:24 -- common/autotest_common.sh@10 -- # set +x
00:06:07.319 ************************************
00:06:07.319 START TEST thread ************************************
00:06:07.319 15:16:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:06:07.580 * Looking for test storage...
00:06:07.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread
00:06:07.580 15:16:24 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:06:07.580 15:16:24 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:06:07.580 15:16:24 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:07.580 15:16:24 -- common/autotest_common.sh@10 -- # set +x
00:06:07.580 ************************************
00:06:07.580 START TEST thread_poller_perf ************************************
00:06:07.580 15:16:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:06:07.580 [2024-04-26 15:16:24.974688] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:06:07.580 [2024-04-26 15:16:24.974783] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1432587 ]
00:06:07.580 EAL: No free 2048 kB hugepages reported on node 1
00:06:07.841 [2024-04-26 15:16:25.038467] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:07.841 [2024-04-26 15:16:25.099942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.841 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:06:08.785 ======================================
00:06:08.785 busy:2407798932 (cyc)
00:06:08.785 total_run_count: 286000
00:06:08.785 tsc_hz: 2400000000 (cyc)
00:06:08.785 ======================================
00:06:08.785 poller_cost: 8418 (cyc), 3507 (nsec)
00:06:08.785
00:06:08.785 real 0m1.206s
00:06:08.785 user 0m1.138s
00:06:08.785 sys 0m0.064s
00:06:08.785 15:16:26 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:08.785 15:16:26 -- common/autotest_common.sh@10 -- # set +x
00:06:08.785 ************************************
00:06:08.785 END TEST thread_poller_perf
00:06:08.785 ************************************
00:06:08.785 15:16:26 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:08.785 15:16:26 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:06:08.785 15:16:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:08.785 15:16:26 -- common/autotest_common.sh@10 -- # set +x
00:06:09.046 ************************************
00:06:09.046 START TEST thread_poller_perf ************************************
00:06:09.046 15:16:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:09.046 [2024-04-26 15:16:26.364011] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:06:09.046 [2024-04-26 15:16:26.364126] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1432941 ]
00:06:09.046 EAL: No free 2048 kB hugepages reported on node 1
00:06:09.046 [2024-04-26 15:16:26.430456] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:09.046 [2024-04-26 15:16:26.490373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.046 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:06:10.432 ======================================
00:06:10.432 busy:2401901892 (cyc)
00:06:10.432 total_run_count: 3813000
00:06:10.432 tsc_hz: 2400000000 (cyc)
00:06:10.432 ======================================
00:06:10.432 poller_cost: 629 (cyc), 262 (nsec)
00:06:10.432
00:06:10.432 real 0m1.204s
00:06:10.432 user 0m1.132s
00:06:10.432 sys 0m0.068s
00:06:10.432 15:16:27 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:10.432 15:16:27 -- common/autotest_common.sh@10 -- # set +x
00:06:10.432 ************************************
00:06:10.432 END TEST thread_poller_perf
00:06:10.432 ************************************
00:06:10.432 15:16:27 -- thread/thread.sh@17 -- # [[ y != \y ]]
00:06:10.432
00:06:10.432 real 0m2.870s
00:06:10.432 user 0m2.464s
00:06:10.432 sys 0m0.376s
00:06:10.432 15:16:27 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:10.432 15:16:27 -- common/autotest_common.sh@10 -- # set +x
00:06:10.432 ************************************
00:06:10.432 END TEST thread
00:06:10.432 ************************************
00:06:10.432 15:16:27 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh
00:06:10.432 15:16:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:10.432 15:16:27 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:10.432 15:16:27 -- common/autotest_common.sh@10 -- # set +x
00:06:10.432 ************************************
00:06:10.432 START TEST accel ************************************
00:06:10.432 15:16:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh
00:06:10.432 * Looking for test storage...
00:06:10.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:06:10.432 15:16:27 -- accel/accel.sh@81 -- # declare -A expected_opcs
00:06:10.432 15:16:27 -- accel/accel.sh@82 -- # get_expected_opcs
00:06:10.432 15:16:27 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:10.432 15:16:27 -- accel/accel.sh@62 -- # spdk_tgt_pid=1433346
00:06:10.432 15:16:27 -- accel/accel.sh@63 -- # waitforlisten 1433346
00:06:10.432 15:16:27 -- common/autotest_common.sh@817 -- # '[' -z 1433346 ']'
00:06:10.432 15:16:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:10.432 15:16:27 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:10.432 15:16:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:10.432 15:16:27 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:10.432 15:16:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:10.432 15:16:27 -- common/autotest_common.sh@10 -- # set +x 00:06:10.692 15:16:27 -- accel/accel.sh@61 -- # build_accel_config 00:06:10.692 15:16:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.692 15:16:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.692 15:16:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.692 15:16:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.692 15:16:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.692 15:16:27 -- accel/accel.sh@40 -- # local IFS=, 00:06:10.692 15:16:27 -- accel/accel.sh@41 -- # jq -r . 00:06:10.692 [2024-04-26 15:16:27.935517] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:06:10.692 [2024-04-26 15:16:27.935584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1433346 ] 00:06:10.692 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.692 [2024-04-26 15:16:28.000214] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.692 [2024-04-26 15:16:28.072873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.262 15:16:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:11.262 15:16:28 -- common/autotest_common.sh@850 -- # return 0 00:06:11.262 15:16:28 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:11.262 15:16:28 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:11.262 15:16:28 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:11.262 15:16:28 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:11.262 15:16:28 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:11.262 15:16:28 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:11.262 15:16:28 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:11.262 15:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:11.262 15:16:28 -- common/autotest_common.sh@10 -- # set +x 00:06:11.523 15:16:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:11.523 15:16:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # IFS== 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # read -r opc module 00:06:11.523 15:16:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:11.523 15:16:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # IFS== 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # read -r opc module 00:06:11.523 15:16:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:11.523 15:16:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # IFS== 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # read -r opc module 00:06:11.523 15:16:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:11.523 15:16:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # IFS== 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # read -r opc module 00:06:11.523 15:16:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:11.523 15:16:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # IFS== 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # read -r opc module 00:06:11.523 15:16:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:11.523 15:16:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # IFS== 00:06:11.523 
15:16:28 -- accel/accel.sh@72 -- # read -r opc module 00:06:11.523 15:16:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:11.523 15:16:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # IFS== 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # read -r opc module 00:06:11.523 15:16:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:11.523 15:16:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # IFS== 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # read -r opc module 00:06:11.523 15:16:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:11.523 15:16:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # IFS== 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # read -r opc module 00:06:11.523 15:16:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:11.523 15:16:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # IFS== 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # read -r opc module 00:06:11.523 15:16:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:11.523 15:16:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # IFS== 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # read -r opc module 00:06:11.523 15:16:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:11.523 15:16:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # IFS== 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # read -r opc module 00:06:11.523 15:16:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:11.523 15:16:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # IFS== 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # read 
-r opc module 00:06:11.523 15:16:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:11.523 15:16:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # IFS== 00:06:11.523 15:16:28 -- accel/accel.sh@72 -- # read -r opc module 00:06:11.523 15:16:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:11.523 15:16:28 -- accel/accel.sh@75 -- # killprocess 1433346 00:06:11.523 15:16:28 -- common/autotest_common.sh@936 -- # '[' -z 1433346 ']' 00:06:11.523 15:16:28 -- common/autotest_common.sh@940 -- # kill -0 1433346 00:06:11.523 15:16:28 -- common/autotest_common.sh@941 -- # uname 00:06:11.523 15:16:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:11.523 15:16:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1433346 00:06:11.523 15:16:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:11.523 15:16:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:11.523 15:16:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1433346' 00:06:11.523 killing process with pid 1433346 00:06:11.523 15:16:28 -- common/autotest_common.sh@955 -- # kill 1433346 00:06:11.523 15:16:28 -- common/autotest_common.sh@960 -- # wait 1433346 00:06:11.784 15:16:29 -- accel/accel.sh@76 -- # trap - ERR 00:06:11.784 15:16:29 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:11.784 15:16:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:11.784 15:16:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.784 15:16:29 -- common/autotest_common.sh@10 -- # set +x 00:06:11.784 15:16:29 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:06:11.784 15:16:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:11.784 15:16:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.784 15:16:29 -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:06:11.784 15:16:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.784 15:16:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.784 15:16:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.784 15:16:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.784 15:16:29 -- accel/accel.sh@40 -- # local IFS=, 00:06:11.784 15:16:29 -- accel/accel.sh@41 -- # jq -r . 00:06:11.784 15:16:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:11.784 15:16:29 -- common/autotest_common.sh@10 -- # set +x 00:06:12.045 15:16:29 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:12.045 15:16:29 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:12.045 15:16:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.045 15:16:29 -- common/autotest_common.sh@10 -- # set +x 00:06:12.045 ************************************ 00:06:12.045 START TEST accel_missing_filename 00:06:12.045 ************************************ 00:06:12.045 15:16:29 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:06:12.045 15:16:29 -- common/autotest_common.sh@638 -- # local es=0 00:06:12.045 15:16:29 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:12.045 15:16:29 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:12.045 15:16:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:12.045 15:16:29 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:12.045 15:16:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:12.045 15:16:29 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:06:12.045 15:16:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:12.045 15:16:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.045 15:16:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.045 
15:16:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.045 15:16:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.045 15:16:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.045 15:16:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.045 15:16:29 -- accel/accel.sh@40 -- # local IFS=, 00:06:12.045 15:16:29 -- accel/accel.sh@41 -- # jq -r . 00:06:12.045 [2024-04-26 15:16:29.416292] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:06:12.045 [2024-04-26 15:16:29.416352] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1433726 ] 00:06:12.045 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.045 [2024-04-26 15:16:29.478261] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.306 [2024-04-26 15:16:29.540865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.306 [2024-04-26 15:16:29.572738] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:12.306 [2024-04-26 15:16:29.609726] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:12.306 A filename is required. 
00:06:12.306 15:16:29 -- common/autotest_common.sh@641 -- # es=234 00:06:12.306 15:16:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:12.306 15:16:29 -- common/autotest_common.sh@650 -- # es=106 00:06:12.306 15:16:29 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:12.306 15:16:29 -- common/autotest_common.sh@658 -- # es=1 00:06:12.306 15:16:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:12.306 00:06:12.306 real 0m0.275s 00:06:12.306 user 0m0.215s 00:06:12.306 sys 0m0.101s 00:06:12.306 15:16:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:12.306 15:16:29 -- common/autotest_common.sh@10 -- # set +x 00:06:12.306 ************************************ 00:06:12.306 END TEST accel_missing_filename 00:06:12.306 ************************************ 00:06:12.306 15:16:29 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:12.306 15:16:29 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:12.306 15:16:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.306 15:16:29 -- common/autotest_common.sh@10 -- # set +x 00:06:12.567 ************************************ 00:06:12.567 START TEST accel_compress_verify 00:06:12.567 ************************************ 00:06:12.567 15:16:29 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:12.567 15:16:29 -- common/autotest_common.sh@638 -- # local es=0 00:06:12.567 15:16:29 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:12.567 15:16:29 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:12.567 15:16:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:12.567 15:16:29 -- common/autotest_common.sh@630 -- # type -t 
accel_perf 00:06:12.567 15:16:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:12.567 15:16:29 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:12.567 15:16:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:12.567 15:16:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.567 15:16:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.567 15:16:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.567 15:16:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.567 15:16:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.567 15:16:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.567 15:16:29 -- accel/accel.sh@40 -- # local IFS=, 00:06:12.567 15:16:29 -- accel/accel.sh@41 -- # jq -r . 00:06:12.567 [2024-04-26 15:16:29.890862] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:06:12.567 [2024-04-26 15:16:29.890938] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1433766 ] 00:06:12.567 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.567 [2024-04-26 15:16:29.956281] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.828 [2024-04-26 15:16:30.027307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.828 [2024-04-26 15:16:30.059902] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:12.828 [2024-04-26 15:16:30.097465] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:12.828 00:06:12.828 Compression does not support the verify option, aborting. 
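The `es` sequences traced above (es=234 -> 106 -> 1 for accel_missing_filename, es=161 -> 33 -> 1 here) show the NOT helper in autotest_common.sh normalizing exit statuses. A sketch of that normalization, with the caveat that the exact `case` patterns are an assumption — the log only shows that values above 128 have 128 subtracted and that both resulting codes collapse to 1:

```shell
# Hypothetical standalone version of the NOT helper's exit-status handling.
normalize_es() {
  local es="$1"
  if (( es > 128 )); then
    es=$(( es - 128 ))   # strip the "terminated by signal" offset
  fi
  case "$es" in          # assumed mapping: the log only shows the final es=1
    0) ;;                # success stays 0
    *) es=1 ;;           # any failure collapses to 1 for the caller
  esac
  echo "$es"
}
normalize_es 234   # -> 1
normalize_es 161   # -> 1
```

Collapsing to a single failure code lets `run_test ... NOT cmd` assert only "the command failed as expected" without depending on which signal or error path produced the failure.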
00:06:12.828 15:16:30 -- common/autotest_common.sh@641 -- # es=161 00:06:12.828 15:16:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:12.828 15:16:30 -- common/autotest_common.sh@650 -- # es=33 00:06:12.828 15:16:30 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:12.828 15:16:30 -- common/autotest_common.sh@658 -- # es=1 00:06:12.828 15:16:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:12.828 00:06:12.828 real 0m0.290s 00:06:12.828 user 0m0.220s 00:06:12.828 sys 0m0.109s 00:06:12.828 15:16:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:12.828 15:16:30 -- common/autotest_common.sh@10 -- # set +x 00:06:12.828 ************************************ 00:06:12.828 END TEST accel_compress_verify 00:06:12.828 ************************************ 00:06:12.828 15:16:30 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:12.828 15:16:30 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:12.828 15:16:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.828 15:16:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.089 ************************************ 00:06:13.089 START TEST accel_wrong_workload 00:06:13.089 ************************************ 00:06:13.089 15:16:30 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:06:13.089 15:16:30 -- common/autotest_common.sh@638 -- # local es=0 00:06:13.089 15:16:30 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:13.089 15:16:30 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:13.089 15:16:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:13.089 15:16:30 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:13.089 15:16:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:13.089 15:16:30 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:06:13.089 15:16:30 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:13.089 15:16:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.089 15:16:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.089 15:16:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.089 15:16:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.089 15:16:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.089 15:16:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.089 15:16:30 -- accel/accel.sh@40 -- # local IFS=, 00:06:13.089 15:16:30 -- accel/accel.sh@41 -- # jq -r . 00:06:13.089 Unsupported workload type: foobar 00:06:13.089 [2024-04-26 15:16:30.374302] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:13.089 accel_perf options: 00:06:13.089 [-h help message] 00:06:13.089 [-q queue depth per core] 00:06:13.089 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:13.089 [-T number of threads per core 00:06:13.089 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:13.089 [-t time in seconds] 00:06:13.089 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:13.089 [ dif_verify, , dif_generate, dif_generate_copy 00:06:13.089 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:13.089 [-l for compress/decompress workloads, name of uncompressed input file 00:06:13.089 [-S for crc32c workload, use this seed value (default 0) 00:06:13.089 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:13.089 [-f for fill workload, use this BYTE value (default 255) 00:06:13.089 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:13.089 [-y verify result if this switch is on] 00:06:13.089 [-a tasks to allocate per core (default: same value as -q)] 00:06:13.089 Can be used to spread operations across a wider range of memory. 00:06:13.089 15:16:30 -- common/autotest_common.sh@641 -- # es=1 00:06:13.089 15:16:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:13.089 15:16:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:13.089 15:16:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:13.089 00:06:13.089 real 0m0.035s 00:06:13.089 user 0m0.025s 00:06:13.089 sys 0m0.010s 00:06:13.089 15:16:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:13.089 15:16:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.089 ************************************ 00:06:13.089 END TEST accel_wrong_workload 00:06:13.089 ************************************ 00:06:13.089 Error: writing output failed: Broken pipe 00:06:13.089 15:16:30 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:13.089 15:16:30 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:13.089 15:16:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:06:13.089 15:16:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.351 ************************************ 00:06:13.351 START TEST accel_negative_buffers 00:06:13.351 ************************************ 00:06:13.351 15:16:30 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:13.351 15:16:30 -- common/autotest_common.sh@638 -- # local es=0 00:06:13.351 15:16:30 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:13.351 15:16:30 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:13.351 15:16:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:13.351 15:16:30 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:13.351 15:16:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:13.351 15:16:30 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:06:13.351 15:16:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:13.351 15:16:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.351 15:16:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.351 15:16:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.351 15:16:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.351 15:16:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.351 15:16:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.351 15:16:30 -- accel/accel.sh@40 -- # local IFS=, 00:06:13.351 15:16:30 -- accel/accel.sh@41 -- # jq -r . 00:06:13.351 -x option must be non-negative. 
00:06:13.351 [2024-04-26 15:16:30.611256] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:13.351 accel_perf options: 00:06:13.351 [-h help message] 00:06:13.351 [-q queue depth per core] 00:06:13.351 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:13.351 [-T number of threads per core 00:06:13.351 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:13.351 [-t time in seconds] 00:06:13.351 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:13.351 [ dif_verify, , dif_generate, dif_generate_copy 00:06:13.351 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:13.351 [-l for compress/decompress workloads, name of uncompressed input file 00:06:13.351 [-S for crc32c workload, use this seed value (default 0) 00:06:13.351 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:13.351 [-f for fill workload, use this BYTE value (default 255) 00:06:13.351 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:13.351 [-y verify result if this switch is on] 00:06:13.351 [-a tasks to allocate per core (default: same value as -q)] 00:06:13.351 Can be used to spread operations across a wider range of memory. 
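The "-x option must be non-negative." rejection above comes from accel_perf's argument parser, which is C code inside SPDK; the following standalone shell check is an illustrative re-implementation only, mirroring just the error text the log shows (the help text additionally notes the practical minimum for xor sources is 2):

```shell
# Illustrative sketch of the -x validation accel_negative_buffers exercises.
check_xor_sources() {
  local x="$1"
  if (( x < 0 )); then
    echo "-x option must be non-negative." >&2
    return 1
  fi
  return 0
}

check_xor_sources -1 2>/dev/null || echo "rejected: -x -1"
check_xor_sources 3               && echo "accepted: -x 3"
```

This is why the test passes `-x -1` under NOT: the invalid buffer count must make accel_perf exit non-zero before any workload runs.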
00:06:13.351 15:16:30 -- common/autotest_common.sh@641 -- # es=1 00:06:13.351 15:16:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:13.351 15:16:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:13.351 15:16:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:13.351 00:06:13.351 real 0m0.037s 00:06:13.351 user 0m0.025s 00:06:13.351 sys 0m0.011s 00:06:13.351 15:16:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:13.351 15:16:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.351 ************************************ 00:06:13.351 END TEST accel_negative_buffers 00:06:13.351 ************************************ 00:06:13.351 Error: writing output failed: Broken pipe 00:06:13.351 15:16:30 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:13.351 15:16:30 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:13.351 15:16:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.351 15:16:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.613 ************************************ 00:06:13.613 START TEST accel_crc32c 00:06:13.613 ************************************ 00:06:13.613 15:16:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:13.613 15:16:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:13.613 15:16:30 -- accel/accel.sh@17 -- # local accel_module 00:06:13.613 15:16:30 -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 15:16:30 -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 15:16:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:13.613 15:16:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:13.613 15:16:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.613 15:16:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.613 15:16:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.613 15:16:30 -- 
accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.613 15:16:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.613 15:16:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.613 15:16:30 -- accel/accel.sh@40 -- # local IFS=, 00:06:13.613 15:16:30 -- accel/accel.sh@41 -- # jq -r . 00:06:13.613 [2024-04-26 15:16:30.849446] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:06:13.613 [2024-04-26 15:16:30.849517] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1434158 ] 00:06:13.613 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.613 [2024-04-26 15:16:30.915001] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.613 [2024-04-26 15:16:30.986324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.613 15:16:31 -- accel/accel.sh@20 -- # val= 00:06:13.613 15:16:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 15:16:31 -- accel/accel.sh@20 -- # val= 00:06:13.613 15:16:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 15:16:31 -- accel/accel.sh@20 -- # val=0x1 00:06:13.613 15:16:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 15:16:31 -- accel/accel.sh@20 -- # val= 00:06:13.613 15:16:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 15:16:31 -- accel/accel.sh@20 -- # val= 00:06:13.613 15:16:31 -- accel/accel.sh@21 -- # case "$var" in 
00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 15:16:31 -- accel/accel.sh@20 -- # val=crc32c 00:06:13.613 15:16:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 15:16:31 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 15:16:31 -- accel/accel.sh@20 -- # val=32 00:06:13.613 15:16:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 15:16:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.613 15:16:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 15:16:31 -- accel/accel.sh@20 -- # val= 00:06:13.613 15:16:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 15:16:31 -- accel/accel.sh@20 -- # val=software 00:06:13.613 15:16:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 15:16:31 -- accel/accel.sh@22 -- # accel_module=software 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 15:16:31 -- accel/accel.sh@20 -- # val=32 00:06:13.613 15:16:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 15:16:31 -- accel/accel.sh@20 -- # val=32 00:06:13.613 15:16:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 15:16:31 -- accel/accel.sh@20 -- # val=1 00:06:13.613 15:16:31 
-- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 15:16:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.613 15:16:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 15:16:31 -- accel/accel.sh@20 -- # val=Yes 00:06:13.613 15:16:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 15:16:31 -- accel/accel.sh@19 -- # read -r var val 00:06:13.614 15:16:31 -- accel/accel.sh@20 -- # val= 00:06:13.614 15:16:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.614 15:16:31 -- accel/accel.sh@19 -- # IFS=: 00:06:13.614 15:16:31 -- accel/accel.sh@19 -- # read -r var val 00:06:13.614 15:16:31 -- accel/accel.sh@20 -- # val= 00:06:13.614 15:16:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.614 15:16:31 -- accel/accel.sh@19 -- # IFS=: 00:06:13.614 15:16:31 -- accel/accel.sh@19 -- # read -r var val 00:06:15.000 15:16:32 -- accel/accel.sh@20 -- # val= 00:06:15.001 15:16:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.001 15:16:32 -- accel/accel.sh@19 -- # IFS=: 00:06:15.001 15:16:32 -- accel/accel.sh@19 -- # read -r var val 00:06:15.001 15:16:32 -- accel/accel.sh@20 -- # val= 00:06:15.001 15:16:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.001 15:16:32 -- accel/accel.sh@19 -- # IFS=: 00:06:15.001 15:16:32 -- accel/accel.sh@19 -- # read -r var val 00:06:15.001 15:16:32 -- accel/accel.sh@20 -- # val= 00:06:15.001 15:16:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.001 15:16:32 -- accel/accel.sh@19 -- # IFS=: 00:06:15.001 15:16:32 -- accel/accel.sh@19 -- # read -r var val 00:06:15.001 15:16:32 -- accel/accel.sh@20 -- # val= 00:06:15.001 15:16:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.001 15:16:32 -- accel/accel.sh@19 -- # IFS=: 00:06:15.001 
15:16:32 -- accel/accel.sh@19 -- # read -r var val 00:06:15.001 15:16:32 -- accel/accel.sh@20 -- # val= 00:06:15.001 15:16:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.001 15:16:32 -- accel/accel.sh@19 -- # IFS=: 00:06:15.001 15:16:32 -- accel/accel.sh@19 -- # read -r var val 00:06:15.001 15:16:32 -- accel/accel.sh@20 -- # val= 00:06:15.001 15:16:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.001 15:16:32 -- accel/accel.sh@19 -- # IFS=: 00:06:15.001 15:16:32 -- accel/accel.sh@19 -- # read -r var val 00:06:15.001 15:16:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.001 15:16:32 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:15.001 15:16:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.001 00:06:15.001 real 0m1.295s 00:06:15.001 user 0m1.193s 00:06:15.001 sys 0m0.114s 00:06:15.001 15:16:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.001 15:16:32 -- common/autotest_common.sh@10 -- # set +x 00:06:15.001 ************************************ 00:06:15.001 END TEST accel_crc32c 00:06:15.001 ************************************ 00:06:15.001 15:16:32 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:15.001 15:16:32 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:15.001 15:16:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.001 15:16:32 -- common/autotest_common.sh@10 -- # set +x 00:06:15.001 ************************************ 00:06:15.001 START TEST accel_crc32c_C2 00:06:15.001 ************************************ 00:06:15.001 15:16:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:15.001 15:16:32 -- accel/accel.sh@16 -- # local accel_opc 00:06:15.001 15:16:32 -- accel/accel.sh@17 -- # local accel_module 00:06:15.001 15:16:32 -- accel/accel.sh@19 -- # IFS=: 00:06:15.001 15:16:32 -- accel/accel.sh@19 -- # read -r var val 00:06:15.001 15:16:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 
00:06:15.001 15:16:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:15.001 15:16:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.001 15:16:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.001 15:16:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.001 15:16:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.001 15:16:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.001 15:16:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.001 15:16:32 -- accel/accel.sh@40 -- # local IFS=, 00:06:15.001 15:16:32 -- accel/accel.sh@41 -- # jq -r . 00:06:15.001 [2024-04-26 15:16:32.327915] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:06:15.001 [2024-04-26 15:16:32.327980] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1434406 ] 00:06:15.001 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.001 [2024-04-26 15:16:32.390267] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.262 [2024-04-26 15:16:32.456453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.262 15:16:32 -- accel/accel.sh@20 -- # val= 00:06:15.262 15:16:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # IFS=: 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # read -r var val 00:06:15.262 15:16:32 -- accel/accel.sh@20 -- # val= 00:06:15.262 15:16:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # IFS=: 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # read -r var val 00:06:15.262 15:16:32 -- accel/accel.sh@20 -- # val=0x1 00:06:15.262 15:16:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # IFS=: 00:06:15.262 15:16:32 -- 
accel/accel.sh@19 -- # read -r var val 00:06:15.262 15:16:32 -- accel/accel.sh@20 -- # val= 00:06:15.262 15:16:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # IFS=: 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # read -r var val 00:06:15.262 15:16:32 -- accel/accel.sh@20 -- # val= 00:06:15.262 15:16:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # IFS=: 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # read -r var val 00:06:15.262 15:16:32 -- accel/accel.sh@20 -- # val=crc32c 00:06:15.262 15:16:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.262 15:16:32 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # IFS=: 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # read -r var val 00:06:15.262 15:16:32 -- accel/accel.sh@20 -- # val=0 00:06:15.262 15:16:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # IFS=: 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # read -r var val 00:06:15.262 15:16:32 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.262 15:16:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # IFS=: 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # read -r var val 00:06:15.262 15:16:32 -- accel/accel.sh@20 -- # val= 00:06:15.262 15:16:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # IFS=: 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # read -r var val 00:06:15.262 15:16:32 -- accel/accel.sh@20 -- # val=software 00:06:15.262 15:16:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.262 15:16:32 -- accel/accel.sh@22 -- # accel_module=software 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # IFS=: 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # read -r var val 00:06:15.262 15:16:32 -- accel/accel.sh@20 -- # val=32 00:06:15.262 15:16:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.262 15:16:32 -- accel/accel.sh@19 
-- # IFS=: 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # read -r var val 00:06:15.262 15:16:32 -- accel/accel.sh@20 -- # val=32 00:06:15.262 15:16:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # IFS=: 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # read -r var val 00:06:15.262 15:16:32 -- accel/accel.sh@20 -- # val=1 00:06:15.262 15:16:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # IFS=: 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # read -r var val 00:06:15.262 15:16:32 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.262 15:16:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # IFS=: 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # read -r var val 00:06:15.262 15:16:32 -- accel/accel.sh@20 -- # val=Yes 00:06:15.262 15:16:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # IFS=: 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # read -r var val 00:06:15.262 15:16:32 -- accel/accel.sh@20 -- # val= 00:06:15.262 15:16:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # IFS=: 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # read -r var val 00:06:15.262 15:16:32 -- accel/accel.sh@20 -- # val= 00:06:15.262 15:16:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.262 15:16:32 -- accel/accel.sh@19 -- # IFS=: 00:06:15.263 15:16:32 -- accel/accel.sh@19 -- # read -r var val 00:06:16.208 15:16:33 -- accel/accel.sh@20 -- # val= 00:06:16.208 15:16:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.208 15:16:33 -- accel/accel.sh@19 -- # IFS=: 00:06:16.208 15:16:33 -- accel/accel.sh@19 -- # read -r var val 00:06:16.208 15:16:33 -- accel/accel.sh@20 -- # val= 00:06:16.208 15:16:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.208 15:16:33 -- accel/accel.sh@19 -- # IFS=: 00:06:16.208 15:16:33 -- accel/accel.sh@19 -- # read -r var val 00:06:16.208 15:16:33 -- 
accel/accel.sh@20 -- # val= 00:06:16.208 15:16:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.208 15:16:33 -- accel/accel.sh@19 -- # IFS=: 00:06:16.208 15:16:33 -- accel/accel.sh@19 -- # read -r var val 00:06:16.208 15:16:33 -- accel/accel.sh@20 -- # val= 00:06:16.208 15:16:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.208 15:16:33 -- accel/accel.sh@19 -- # IFS=: 00:06:16.208 15:16:33 -- accel/accel.sh@19 -- # read -r var val 00:06:16.208 15:16:33 -- accel/accel.sh@20 -- # val= 00:06:16.208 15:16:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.208 15:16:33 -- accel/accel.sh@19 -- # IFS=: 00:06:16.208 15:16:33 -- accel/accel.sh@19 -- # read -r var val 00:06:16.208 15:16:33 -- accel/accel.sh@20 -- # val= 00:06:16.208 15:16:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.208 15:16:33 -- accel/accel.sh@19 -- # IFS=: 00:06:16.208 15:16:33 -- accel/accel.sh@19 -- # read -r var val 00:06:16.208 15:16:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.208 15:16:33 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:16.208 15:16:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.208 00:06:16.208 real 0m1.285s 00:06:16.208 user 0m1.190s 00:06:16.208 sys 0m0.106s 00:06:16.208 15:16:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:16.208 15:16:33 -- common/autotest_common.sh@10 -- # set +x 00:06:16.208 ************************************ 00:06:16.208 END TEST accel_crc32c_C2 00:06:16.208 ************************************ 00:06:16.208 15:16:33 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:16.208 15:16:33 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:16.208 15:16:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.208 15:16:33 -- common/autotest_common.sh@10 -- # set +x 00:06:16.544 ************************************ 00:06:16.544 START TEST accel_copy 00:06:16.544 ************************************ 00:06:16.544 15:16:33 -- 
common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:06:16.544 15:16:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.544 15:16:33 -- accel/accel.sh@17 -- # local accel_module 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # IFS=: 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # read -r var val 00:06:16.544 15:16:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:16.544 15:16:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:16.544 15:16:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.544 15:16:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.544 15:16:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.544 15:16:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.544 15:16:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.544 15:16:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.544 15:16:33 -- accel/accel.sh@40 -- # local IFS=, 00:06:16.544 15:16:33 -- accel/accel.sh@41 -- # jq -r . 00:06:16.544 [2024-04-26 15:16:33.810371] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:06:16.544 [2024-04-26 15:16:33.810438] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1434653 ] 00:06:16.544 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.544 [2024-04-26 15:16:33.875945] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.544 [2024-04-26 15:16:33.947520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.544 15:16:33 -- accel/accel.sh@20 -- # val= 00:06:16.544 15:16:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # IFS=: 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # read -r var val 00:06:16.544 15:16:33 -- accel/accel.sh@20 -- # val= 00:06:16.544 15:16:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # IFS=: 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # read -r var val 00:06:16.544 15:16:33 -- accel/accel.sh@20 -- # val=0x1 00:06:16.544 15:16:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # IFS=: 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # read -r var val 00:06:16.544 15:16:33 -- accel/accel.sh@20 -- # val= 00:06:16.544 15:16:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # IFS=: 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # read -r var val 00:06:16.544 15:16:33 -- accel/accel.sh@20 -- # val= 00:06:16.544 15:16:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # IFS=: 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # read -r var val 00:06:16.544 15:16:33 -- accel/accel.sh@20 -- # val=copy 00:06:16.544 15:16:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.544 15:16:33 -- accel/accel.sh@23 -- # accel_opc=copy 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # IFS=: 00:06:16.544 15:16:33 -- 
accel/accel.sh@19 -- # read -r var val 00:06:16.544 15:16:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.544 15:16:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # IFS=: 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # read -r var val 00:06:16.544 15:16:33 -- accel/accel.sh@20 -- # val= 00:06:16.544 15:16:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # IFS=: 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # read -r var val 00:06:16.544 15:16:33 -- accel/accel.sh@20 -- # val=software 00:06:16.544 15:16:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.544 15:16:33 -- accel/accel.sh@22 -- # accel_module=software 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # IFS=: 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # read -r var val 00:06:16.544 15:16:33 -- accel/accel.sh@20 -- # val=32 00:06:16.544 15:16:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # IFS=: 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # read -r var val 00:06:16.544 15:16:33 -- accel/accel.sh@20 -- # val=32 00:06:16.544 15:16:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # IFS=: 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # read -r var val 00:06:16.544 15:16:33 -- accel/accel.sh@20 -- # val=1 00:06:16.544 15:16:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # IFS=: 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # read -r var val 00:06:16.544 15:16:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.544 15:16:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # IFS=: 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # read -r var val 00:06:16.544 15:16:33 -- accel/accel.sh@20 -- # val=Yes 00:06:16.544 15:16:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # IFS=: 00:06:16.544 15:16:33 -- accel/accel.sh@19 
-- # read -r var val 00:06:16.544 15:16:33 -- accel/accel.sh@20 -- # val= 00:06:16.544 15:16:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # IFS=: 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # read -r var val 00:06:16.544 15:16:33 -- accel/accel.sh@20 -- # val= 00:06:16.544 15:16:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # IFS=: 00:06:16.544 15:16:33 -- accel/accel.sh@19 -- # read -r var val 00:06:17.971 15:16:35 -- accel/accel.sh@20 -- # val= 00:06:17.971 15:16:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.971 15:16:35 -- accel/accel.sh@19 -- # IFS=: 00:06:17.971 15:16:35 -- accel/accel.sh@19 -- # read -r var val 00:06:17.971 15:16:35 -- accel/accel.sh@20 -- # val= 00:06:17.971 15:16:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.971 15:16:35 -- accel/accel.sh@19 -- # IFS=: 00:06:17.971 15:16:35 -- accel/accel.sh@19 -- # read -r var val 00:06:17.971 15:16:35 -- accel/accel.sh@20 -- # val= 00:06:17.971 15:16:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.971 15:16:35 -- accel/accel.sh@19 -- # IFS=: 00:06:17.971 15:16:35 -- accel/accel.sh@19 -- # read -r var val 00:06:17.971 15:16:35 -- accel/accel.sh@20 -- # val= 00:06:17.971 15:16:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.971 15:16:35 -- accel/accel.sh@19 -- # IFS=: 00:06:17.971 15:16:35 -- accel/accel.sh@19 -- # read -r var val 00:06:17.971 15:16:35 -- accel/accel.sh@20 -- # val= 00:06:17.971 15:16:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.971 15:16:35 -- accel/accel.sh@19 -- # IFS=: 00:06:17.971 15:16:35 -- accel/accel.sh@19 -- # read -r var val 00:06:17.971 15:16:35 -- accel/accel.sh@20 -- # val= 00:06:17.971 15:16:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.971 15:16:35 -- accel/accel.sh@19 -- # IFS=: 00:06:17.971 15:16:35 -- accel/accel.sh@19 -- # read -r var val 00:06:17.971 15:16:35 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.971 15:16:35 -- 
accel/accel.sh@27 -- # [[ -n copy ]] 00:06:17.971 15:16:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.971 00:06:17.971 real 0m1.295s 00:06:17.971 user 0m1.198s 00:06:17.971 sys 0m0.108s 00:06:17.971 15:16:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:17.971 15:16:35 -- common/autotest_common.sh@10 -- # set +x 00:06:17.971 ************************************ 00:06:17.971 END TEST accel_copy 00:06:17.971 ************************************ 00:06:17.971 15:16:35 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:17.971 15:16:35 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:17.971 15:16:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.971 15:16:35 -- common/autotest_common.sh@10 -- # set +x 00:06:17.971 ************************************ 00:06:17.971 START TEST accel_fill 00:06:17.971 ************************************ 00:06:17.971 15:16:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:17.971 15:16:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.971 15:16:35 -- accel/accel.sh@17 -- # local accel_module 00:06:17.971 15:16:35 -- accel/accel.sh@19 -- # IFS=: 00:06:17.971 15:16:35 -- accel/accel.sh@19 -- # read -r var val 00:06:17.971 15:16:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:17.971 15:16:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:17.971 15:16:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.971 15:16:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.971 15:16:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.971 15:16:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.971 15:16:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.971 15:16:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.971 15:16:35 -- 
accel/accel.sh@40 -- # local IFS=, 00:06:17.971 15:16:35 -- accel/accel.sh@41 -- # jq -r . 00:06:17.971 [2024-04-26 15:16:35.299656] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:06:17.971 [2024-04-26 15:16:35.299729] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1434936 ] 00:06:17.971 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.971 [2024-04-26 15:16:35.365938] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.232 [2024-04-26 15:16:35.438778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.232 15:16:35 -- accel/accel.sh@20 -- # val= 00:06:18.232 15:16:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # IFS=: 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # read -r var val 00:06:18.233 15:16:35 -- accel/accel.sh@20 -- # val= 00:06:18.233 15:16:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # IFS=: 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # read -r var val 00:06:18.233 15:16:35 -- accel/accel.sh@20 -- # val=0x1 00:06:18.233 15:16:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # IFS=: 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # read -r var val 00:06:18.233 15:16:35 -- accel/accel.sh@20 -- # val= 00:06:18.233 15:16:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # IFS=: 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # read -r var val 00:06:18.233 15:16:35 -- accel/accel.sh@20 -- # val= 00:06:18.233 15:16:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # IFS=: 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # read -r var val 00:06:18.233 15:16:35 -- accel/accel.sh@20 -- # val=fill 
00:06:18.233 15:16:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.233 15:16:35 -- accel/accel.sh@23 -- # accel_opc=fill 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # IFS=: 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # read -r var val 00:06:18.233 15:16:35 -- accel/accel.sh@20 -- # val=0x80 00:06:18.233 15:16:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # IFS=: 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # read -r var val 00:06:18.233 15:16:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.233 15:16:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # IFS=: 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # read -r var val 00:06:18.233 15:16:35 -- accel/accel.sh@20 -- # val= 00:06:18.233 15:16:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # IFS=: 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # read -r var val 00:06:18.233 15:16:35 -- accel/accel.sh@20 -- # val=software 00:06:18.233 15:16:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.233 15:16:35 -- accel/accel.sh@22 -- # accel_module=software 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # IFS=: 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # read -r var val 00:06:18.233 15:16:35 -- accel/accel.sh@20 -- # val=64 00:06:18.233 15:16:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # IFS=: 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # read -r var val 00:06:18.233 15:16:35 -- accel/accel.sh@20 -- # val=64 00:06:18.233 15:16:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # IFS=: 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # read -r var val 00:06:18.233 15:16:35 -- accel/accel.sh@20 -- # val=1 00:06:18.233 15:16:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # IFS=: 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # read -r var val 00:06:18.233 
15:16:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.233 15:16:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # IFS=: 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # read -r var val 00:06:18.233 15:16:35 -- accel/accel.sh@20 -- # val=Yes 00:06:18.233 15:16:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # IFS=: 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # read -r var val 00:06:18.233 15:16:35 -- accel/accel.sh@20 -- # val= 00:06:18.233 15:16:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # IFS=: 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # read -r var val 00:06:18.233 15:16:35 -- accel/accel.sh@20 -- # val= 00:06:18.233 15:16:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # IFS=: 00:06:18.233 15:16:35 -- accel/accel.sh@19 -- # read -r var val 00:06:19.173 15:16:36 -- accel/accel.sh@20 -- # val= 00:06:19.173 15:16:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.174 15:16:36 -- accel/accel.sh@19 -- # IFS=: 00:06:19.174 15:16:36 -- accel/accel.sh@19 -- # read -r var val 00:06:19.174 15:16:36 -- accel/accel.sh@20 -- # val= 00:06:19.174 15:16:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.174 15:16:36 -- accel/accel.sh@19 -- # IFS=: 00:06:19.174 15:16:36 -- accel/accel.sh@19 -- # read -r var val 00:06:19.174 15:16:36 -- accel/accel.sh@20 -- # val= 00:06:19.174 15:16:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.174 15:16:36 -- accel/accel.sh@19 -- # IFS=: 00:06:19.174 15:16:36 -- accel/accel.sh@19 -- # read -r var val 00:06:19.174 15:16:36 -- accel/accel.sh@20 -- # val= 00:06:19.174 15:16:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.174 15:16:36 -- accel/accel.sh@19 -- # IFS=: 00:06:19.174 15:16:36 -- accel/accel.sh@19 -- # read -r var val 00:06:19.174 15:16:36 -- accel/accel.sh@20 -- # val= 00:06:19.174 15:16:36 -- accel/accel.sh@21 -- # case "$var" in 
00:06:19.174 15:16:36 -- accel/accel.sh@19 -- # IFS=: 00:06:19.174 15:16:36 -- accel/accel.sh@19 -- # read -r var val 00:06:19.174 15:16:36 -- accel/accel.sh@20 -- # val= 00:06:19.174 15:16:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.174 15:16:36 -- accel/accel.sh@19 -- # IFS=: 00:06:19.174 15:16:36 -- accel/accel.sh@19 -- # read -r var val 00:06:19.174 15:16:36 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.174 15:16:36 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:19.174 15:16:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.174 00:06:19.174 real 0m1.297s 00:06:19.174 user 0m1.196s 00:06:19.174 sys 0m0.112s 00:06:19.174 15:16:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:19.174 15:16:36 -- common/autotest_common.sh@10 -- # set +x 00:06:19.174 ************************************ 00:06:19.174 END TEST accel_fill 00:06:19.174 ************************************ 00:06:19.174 15:16:36 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:19.174 15:16:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:19.174 15:16:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.174 15:16:36 -- common/autotest_common.sh@10 -- # set +x 00:06:19.434 ************************************ 00:06:19.434 START TEST accel_copy_crc32c 00:06:19.434 ************************************ 00:06:19.434 15:16:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:06:19.434 15:16:36 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.434 15:16:36 -- accel/accel.sh@17 -- # local accel_module 00:06:19.434 15:16:36 -- accel/accel.sh@19 -- # IFS=: 00:06:19.434 15:16:36 -- accel/accel.sh@19 -- # read -r var val 00:06:19.434 15:16:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:19.434 15:16:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 
00:06:19.434 15:16:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.434 15:16:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.434 15:16:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.434 15:16:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.434 15:16:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.434 15:16:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.434 15:16:36 -- accel/accel.sh@40 -- # local IFS=, 00:06:19.434 15:16:36 -- accel/accel.sh@41 -- # jq -r . 00:06:19.434 [2024-04-26 15:16:36.788947] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:06:19.434 [2024-04-26 15:16:36.789031] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1435292 ] 00:06:19.434 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.434 [2024-04-26 15:16:36.860007] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.695 [2024-04-26 15:16:36.924514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.695 15:16:36 -- accel/accel.sh@20 -- # val= 00:06:19.695 15:16:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.695 15:16:36 -- accel/accel.sh@19 -- # IFS=: 00:06:19.695 15:16:36 -- accel/accel.sh@19 -- # read -r var val 00:06:19.695 15:16:36 -- accel/accel.sh@20 -- # val= 00:06:19.695 15:16:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.695 15:16:36 -- accel/accel.sh@19 -- # IFS=: 00:06:19.695 15:16:36 -- accel/accel.sh@19 -- # read -r var val 00:06:19.695 15:16:36 -- accel/accel.sh@20 -- # val=0x1 00:06:19.695 15:16:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.695 15:16:36 -- accel/accel.sh@19 -- # IFS=: 00:06:19.695 15:16:36 -- accel/accel.sh@19 -- # read -r var val 00:06:19.695 15:16:36 -- accel/accel.sh@20 -- # val= 00:06:19.695 15:16:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.695 
15:16:36 -- accel/accel.sh@19 -- # IFS=:
00:06:19.695 15:16:36 -- accel/accel.sh@19 -- # read -r var val
00:06:19.695 15:16:36 -- accel/accel.sh@20 -- # val=
00:06:19.695 15:16:36 -- accel/accel.sh@21 -- # case "$var" in
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # IFS=:
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # read -r var val
00:06:19.696 15:16:36 -- accel/accel.sh@20 -- # val=copy_crc32c
00:06:19.696 15:16:36 -- accel/accel.sh@21 -- # case "$var" in
00:06:19.696 15:16:36 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # IFS=:
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # read -r var val
00:06:19.696 15:16:36 -- accel/accel.sh@20 -- # val=0
00:06:19.696 15:16:36 -- accel/accel.sh@21 -- # case "$var" in
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # IFS=:
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # read -r var val
00:06:19.696 15:16:36 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:19.696 15:16:36 -- accel/accel.sh@21 -- # case "$var" in
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # IFS=:
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # read -r var val
00:06:19.696 15:16:36 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:19.696 15:16:36 -- accel/accel.sh@21 -- # case "$var" in
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # IFS=:
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # read -r var val
00:06:19.696 15:16:36 -- accel/accel.sh@20 -- # val=
00:06:19.696 15:16:36 -- accel/accel.sh@21 -- # case "$var" in
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # IFS=:
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # read -r var val
00:06:19.696 15:16:36 -- accel/accel.sh@20 -- # val=software
00:06:19.696 15:16:36 -- accel/accel.sh@21 -- # case "$var" in
00:06:19.696 15:16:36 -- accel/accel.sh@22 -- # accel_module=software
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # IFS=:
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # read -r var val
00:06:19.696 15:16:36 -- accel/accel.sh@20 -- # val=32
00:06:19.696 15:16:36 -- accel/accel.sh@21 -- # case "$var" in
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # IFS=:
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # read -r var val
00:06:19.696 15:16:36 -- accel/accel.sh@20 -- # val=32
00:06:19.696 15:16:36 -- accel/accel.sh@21 -- # case "$var" in
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # IFS=:
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # read -r var val
00:06:19.696 15:16:36 -- accel/accel.sh@20 -- # val=1
00:06:19.696 15:16:36 -- accel/accel.sh@21 -- # case "$var" in
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # IFS=:
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # read -r var val
00:06:19.696 15:16:36 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:19.696 15:16:36 -- accel/accel.sh@21 -- # case "$var" in
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # IFS=:
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # read -r var val
00:06:19.696 15:16:36 -- accel/accel.sh@20 -- # val=Yes
00:06:19.696 15:16:36 -- accel/accel.sh@21 -- # case "$var" in
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # IFS=:
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # read -r var val
00:06:19.696 15:16:36 -- accel/accel.sh@20 -- # val=
00:06:19.696 15:16:36 -- accel/accel.sh@21 -- # case "$var" in
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # IFS=:
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # read -r var val
00:06:19.696 15:16:36 -- accel/accel.sh@20 -- # val=
00:06:19.696 15:16:36 -- accel/accel.sh@21 -- # case "$var" in
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # IFS=:
00:06:19.696 15:16:36 -- accel/accel.sh@19 -- # read -r var val
00:06:20.640 15:16:38 -- accel/accel.sh@20 -- # val=
00:06:20.640 15:16:38 -- accel/accel.sh@21 -- # case "$var" in
00:06:20.640 15:16:38 -- accel/accel.sh@19 -- # IFS=:
00:06:20.640 15:16:38 -- accel/accel.sh@19 -- # read -r var val
00:06:20.640 15:16:38 -- accel/accel.sh@20 -- # val=
00:06:20.640 15:16:38 -- accel/accel.sh@21 -- # case "$var" in
00:06:20.640 15:16:38 -- accel/accel.sh@19 -- # IFS=:
00:06:20.640 15:16:38 -- accel/accel.sh@19 -- # read -r var val
00:06:20.640 15:16:38 -- accel/accel.sh@20 -- # val=
00:06:20.640 15:16:38 -- accel/accel.sh@21 -- # case "$var" in
00:06:20.640 15:16:38 -- accel/accel.sh@19 -- # IFS=:
00:06:20.640 15:16:38 -- accel/accel.sh@19 -- # read -r var val
00:06:20.640 15:16:38 -- accel/accel.sh@20 -- # val=
00:06:20.640 15:16:38 -- accel/accel.sh@21 -- # case "$var" in
00:06:20.640 15:16:38 -- accel/accel.sh@19 -- # IFS=:
00:06:20.640 15:16:38 -- accel/accel.sh@19 -- # read -r var val
00:06:20.640 15:16:38 -- accel/accel.sh@20 -- # val=
00:06:20.640 15:16:38 -- accel/accel.sh@21 -- # case "$var" in
00:06:20.640 15:16:38 -- accel/accel.sh@19 -- # IFS=:
00:06:20.640 15:16:38 -- accel/accel.sh@19 -- # read -r var val
00:06:20.640 15:16:38 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:20.640 15:16:38 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:20.640 15:16:38 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:20.640
00:06:20.640 real	0m1.292s
00:06:20.640 user	0m1.190s
00:06:20.640 sys	0m0.113s
00:06:20.640 15:16:38 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:20.640 15:16:38 -- common/autotest_common.sh@10 -- # set +x
00:06:20.640 ************************************
00:06:20.640 END TEST accel_copy_crc32c
00:06:20.640 ************************************
00:06:20.901 15:16:38 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:06:20.901 15:16:38 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:06:20.901 15:16:38 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:20.901 15:16:38 -- common/autotest_common.sh@10 -- # set +x
00:06:20.901 ************************************
00:06:20.901 START TEST accel_copy_crc32c_C2
00:06:20.901 ************************************
00:06:20.901 15:16:38 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:06:20.901 15:16:38 -- accel/accel.sh@16 -- # local accel_opc
00:06:20.901 15:16:38 -- accel/accel.sh@17 -- # local accel_module
00:06:20.901 15:16:38 -- accel/accel.sh@19 -- # IFS=:
00:06:20.901 15:16:38 -- accel/accel.sh@19 -- # read -r var val
00:06:20.901 15:16:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:06:20.901 15:16:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:06:20.901 15:16:38 -- accel/accel.sh@12 -- # build_accel_config
00:06:20.901 15:16:38 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:20.901 15:16:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:20.901 15:16:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:20.901 15:16:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:20.901 15:16:38 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:20.902 15:16:38 -- accel/accel.sh@40 -- # local IFS=,
00:06:20.902 15:16:38 -- accel/accel.sh@41 -- # jq -r .
00:06:20.902 [2024-04-26 15:16:38.276723] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:06:20.902 [2024-04-26 15:16:38.276819] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1435654 ]
00:06:20.902 EAL: No free 2048 kB hugepages reported on node 1
00:06:21.163 [2024-04-26 15:16:38.342430] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:21.163 [2024-04-26 15:16:38.412664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:21.163 15:16:38 -- accel/accel.sh@20 -- # val=
00:06:21.163 15:16:38 -- accel/accel.sh@21 -- # case "$var" in
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # IFS=:
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # read -r var val
00:06:21.163 15:16:38 -- accel/accel.sh@20 -- # val=
00:06:21.163 15:16:38 -- accel/accel.sh@21 -- # case "$var" in
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # IFS=:
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # read -r var val
00:06:21.163 15:16:38 -- accel/accel.sh@20 -- # val=0x1
00:06:21.163 15:16:38 -- accel/accel.sh@21 -- # case "$var" in
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # IFS=:
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # read -r var val
00:06:21.163 15:16:38 -- accel/accel.sh@20 -- # val=
00:06:21.163 15:16:38 -- accel/accel.sh@21 -- # case "$var" in
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # IFS=:
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # read -r var val
00:06:21.163 15:16:38 -- accel/accel.sh@20 -- # val=
00:06:21.163 15:16:38 -- accel/accel.sh@21 -- # case "$var" in
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # IFS=:
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # read -r var val
00:06:21.163 15:16:38 -- accel/accel.sh@20 -- # val=copy_crc32c
00:06:21.163 15:16:38 -- accel/accel.sh@21 -- # case "$var" in
00:06:21.163 15:16:38 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # IFS=:
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # read -r var val
00:06:21.163 15:16:38 -- accel/accel.sh@20 -- # val=0
00:06:21.163 15:16:38 -- accel/accel.sh@21 -- # case "$var" in
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # IFS=:
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # read -r var val
00:06:21.163 15:16:38 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:21.163 15:16:38 -- accel/accel.sh@21 -- # case "$var" in
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # IFS=:
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # read -r var val
00:06:21.163 15:16:38 -- accel/accel.sh@20 -- # val='8192 bytes'
00:06:21.163 15:16:38 -- accel/accel.sh@21 -- # case "$var" in
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # IFS=:
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # read -r var val
00:06:21.163 15:16:38 -- accel/accel.sh@20 -- # val=
00:06:21.163 15:16:38 -- accel/accel.sh@21 -- # case "$var" in
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # IFS=:
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # read -r var val
00:06:21.163 15:16:38 -- accel/accel.sh@20 -- # val=software
00:06:21.163 15:16:38 -- accel/accel.sh@21 -- # case "$var" in
00:06:21.163 15:16:38 -- accel/accel.sh@22 -- # accel_module=software
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # IFS=:
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # read -r var val
00:06:21.163 15:16:38 -- accel/accel.sh@20 -- # val=32
00:06:21.163 15:16:38 -- accel/accel.sh@21 -- # case "$var" in
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # IFS=:
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # read -r var val
00:06:21.163 15:16:38 -- accel/accel.sh@20 -- # val=32
00:06:21.163 15:16:38 -- accel/accel.sh@21 -- # case "$var" in
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # IFS=:
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # read -r var val
00:06:21.163 15:16:38 -- accel/accel.sh@20 -- # val=1
00:06:21.163 15:16:38 -- accel/accel.sh@21 -- # case "$var" in
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # IFS=:
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # read -r var val
00:06:21.163 15:16:38 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:21.163 15:16:38 -- accel/accel.sh@21 -- # case "$var" in
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # IFS=:
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # read -r var val
00:06:21.163 15:16:38 -- accel/accel.sh@20 -- # val=Yes
00:06:21.163 15:16:38 -- accel/accel.sh@21 -- # case "$var" in
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # IFS=:
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # read -r var val
00:06:21.163 15:16:38 -- accel/accel.sh@20 -- # val=
00:06:21.163 15:16:38 -- accel/accel.sh@21 -- # case "$var" in
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # IFS=:
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # read -r var val
00:06:21.163 15:16:38 -- accel/accel.sh@20 -- # val=
00:06:21.163 15:16:38 -- accel/accel.sh@21 -- # case "$var" in
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # IFS=:
00:06:21.163 15:16:38 -- accel/accel.sh@19 -- # read -r var val
00:06:22.108 15:16:39 -- accel/accel.sh@20 -- # val=
00:06:22.108 15:16:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.108 15:16:39 -- accel/accel.sh@19 -- # IFS=:
00:06:22.108 15:16:39 -- accel/accel.sh@19 -- # read -r var val
00:06:22.108 15:16:39 -- accel/accel.sh@20 -- # val=
00:06:22.108 15:16:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.108 15:16:39 -- accel/accel.sh@19 -- # IFS=:
00:06:22.108 15:16:39 -- accel/accel.sh@19 -- # read -r var val
00:06:22.108 15:16:39 -- accel/accel.sh@20 -- # val=
00:06:22.108 15:16:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.108 15:16:39 -- accel/accel.sh@19 -- # IFS=:
00:06:22.108 15:16:39 -- accel/accel.sh@19 -- # read -r var val
00:06:22.108 15:16:39 -- accel/accel.sh@20 -- # val=
00:06:22.108 15:16:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.108 15:16:39 -- accel/accel.sh@19 -- # IFS=:
00:06:22.108 15:16:39 -- accel/accel.sh@19 -- # read -r var val
00:06:22.108 15:16:39 -- accel/accel.sh@20 -- # val=
00:06:22.108 15:16:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.108 15:16:39 -- accel/accel.sh@19 -- # IFS=:
00:06:22.108 15:16:39 -- accel/accel.sh@19 -- # read -r var val
00:06:22.108 15:16:39 -- accel/accel.sh@20 -- # val=
00:06:22.108 15:16:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.108 15:16:39 -- accel/accel.sh@19 -- # IFS=:
00:06:22.108 15:16:39 -- accel/accel.sh@19 -- # read -r var val
00:06:22.108 15:16:39 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:22.108 15:16:39 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:22.108 15:16:39 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:22.108
00:06:22.108 real	0m1.295s
00:06:22.108 user	0m1.199s
00:06:22.108 sys	0m0.107s
00:06:22.108 15:16:39 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:22.108 15:16:39 -- common/autotest_common.sh@10 -- # set +x
00:06:22.108 ************************************
00:06:22.108 END TEST accel_copy_crc32c_C2
00:06:22.108 ************************************
00:06:22.369 15:16:39 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:06:22.369 15:16:39 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:06:22.369 15:16:39 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:22.369 15:16:39 -- common/autotest_common.sh@10 -- # set +x
00:06:22.369 ************************************
00:06:22.369 START TEST accel_dualcast
00:06:22.369 ************************************
00:06:22.369 15:16:39 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y
00:06:22.369 15:16:39 -- accel/accel.sh@16 -- # local accel_opc
00:06:22.369 15:16:39 -- accel/accel.sh@17 -- # local accel_module
00:06:22.369 15:16:39 -- accel/accel.sh@19 -- # IFS=:
00:06:22.369 15:16:39 -- accel/accel.sh@19 -- # read -r var val
00:06:22.369 15:16:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:06:22.370 15:16:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:06:22.370 15:16:39 -- accel/accel.sh@12 -- # build_accel_config
00:06:22.370 15:16:39 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:22.370 15:16:39 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:22.370 15:16:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:22.370 15:16:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:22.370 15:16:39 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:22.370 15:16:39 -- accel/accel.sh@40 -- # local IFS=,
00:06:22.370 15:16:39 -- accel/accel.sh@41 -- # jq -r .
00:06:22.370 [2024-04-26 15:16:39.767800] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:06:22.370 [2024-04-26 15:16:39.767896] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1436011 ]
00:06:22.370 EAL: No free 2048 kB hugepages reported on node 1
00:06:22.631 [2024-04-26 15:16:39.834064] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:22.631 [2024-04-26 15:16:39.905819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:22.631 15:16:39 -- accel/accel.sh@20 -- # val=
00:06:22.631 15:16:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.631 15:16:39 -- accel/accel.sh@19 -- # IFS=:
00:06:22.631 15:16:39 -- accel/accel.sh@19 -- # read -r var val
00:06:22.631 15:16:39 -- accel/accel.sh@20 -- # val=
00:06:22.631 15:16:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.631 15:16:39 -- accel/accel.sh@19 -- # IFS=:
00:06:22.631 15:16:39 -- accel/accel.sh@19 -- # read -r var val
00:06:22.631 15:16:39 -- accel/accel.sh@20 -- # val=0x1
00:06:22.631 15:16:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.631 15:16:39 -- accel/accel.sh@19 -- # IFS=:
00:06:22.631 15:16:39 -- accel/accel.sh@19 -- # read -r var val
00:06:22.631 15:16:39 -- accel/accel.sh@20 -- # val=
00:06:22.631 15:16:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.631 15:16:39 -- accel/accel.sh@19 -- # IFS=:
00:06:22.631 15:16:39 -- accel/accel.sh@19 -- # read -r var val
00:06:22.631 15:16:39 -- accel/accel.sh@20 -- # val=
00:06:22.631 15:16:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.631 15:16:39 -- accel/accel.sh@19 -- # IFS=:
00:06:22.631 15:16:39 -- accel/accel.sh@19 -- # read -r var val
00:06:22.631 15:16:39 -- accel/accel.sh@20 -- # val=dualcast
00:06:22.631 15:16:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.631 15:16:39 -- accel/accel.sh@23 -- # accel_opc=dualcast
00:06:22.631 15:16:39 -- accel/accel.sh@19 -- # IFS=:
00:06:22.631 15:16:39 -- accel/accel.sh@19 -- # read -r var val
00:06:22.631 15:16:39 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:22.631 15:16:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.631 15:16:39 -- accel/accel.sh@19 -- # IFS=:
00:06:22.631 15:16:39 -- accel/accel.sh@19 -- # read -r var val
00:06:22.631 15:16:39 -- accel/accel.sh@20 -- # val=
00:06:22.631 15:16:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.631 15:16:39 -- accel/accel.sh@19 -- # IFS=:
00:06:22.631 15:16:39 -- accel/accel.sh@19 -- # read -r var val
00:06:22.631 15:16:39 -- accel/accel.sh@20 -- # val=software
00:06:22.631 15:16:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.631 15:16:39 -- accel/accel.sh@22 -- # accel_module=software
00:06:22.631 15:16:39 -- accel/accel.sh@19 -- # IFS=:
00:06:22.632 15:16:39 -- accel/accel.sh@19 -- # read -r var val
00:06:22.632 15:16:39 -- accel/accel.sh@20 -- # val=32
00:06:22.632 15:16:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.632 15:16:39 -- accel/accel.sh@19 -- # IFS=:
00:06:22.632 15:16:39 -- accel/accel.sh@19 -- # read -r var val
00:06:22.632 15:16:39 -- accel/accel.sh@20 -- # val=32
00:06:22.632 15:16:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.632 15:16:39 -- accel/accel.sh@19 -- # IFS=:
00:06:22.632 15:16:39 -- accel/accel.sh@19 -- # read -r var val
00:06:22.632 15:16:39 -- accel/accel.sh@20 -- # val=1
00:06:22.632 15:16:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.632 15:16:39 -- accel/accel.sh@19 -- # IFS=:
00:06:22.632 15:16:39 -- accel/accel.sh@19 -- # read -r var val
00:06:22.632 15:16:39 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:22.632 15:16:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.632 15:16:39 -- accel/accel.sh@19 -- # IFS=:
00:06:22.632 15:16:39 -- accel/accel.sh@19 -- # read -r var val
00:06:22.632 15:16:39 -- accel/accel.sh@20 -- # val=Yes
00:06:22.632 15:16:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.632 15:16:39 -- accel/accel.sh@19 -- # IFS=:
00:06:22.632 15:16:39 -- accel/accel.sh@19 -- # read -r var val
00:06:22.632 15:16:39 -- accel/accel.sh@20 -- # val=
00:06:22.632 15:16:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.632 15:16:39 -- accel/accel.sh@19 -- # IFS=:
00:06:22.632 15:16:39 -- accel/accel.sh@19 -- # read -r var val
00:06:22.632 15:16:39 -- accel/accel.sh@20 -- # val=
00:06:22.632 15:16:39 -- accel/accel.sh@21 -- # case "$var" in
00:06:22.632 15:16:39 -- accel/accel.sh@19 -- # IFS=:
00:06:22.632 15:16:39 -- accel/accel.sh@19 -- # read -r var val
00:06:24.017 15:16:41 -- accel/accel.sh@20 -- # val=
00:06:24.017 15:16:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:24.017 15:16:41 -- accel/accel.sh@19 -- # IFS=:
00:06:24.017 15:16:41 -- accel/accel.sh@19 -- # read -r var val
00:06:24.017 15:16:41 -- accel/accel.sh@20 -- # val=
00:06:24.017 15:16:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:24.017 15:16:41 -- accel/accel.sh@19 -- # IFS=:
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # read -r var val
00:06:24.018 15:16:41 -- accel/accel.sh@20 -- # val=
00:06:24.018 15:16:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # IFS=:
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # read -r var val
00:06:24.018 15:16:41 -- accel/accel.sh@20 -- # val=
00:06:24.018 15:16:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # IFS=:
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # read -r var val
00:06:24.018 15:16:41 -- accel/accel.sh@20 -- # val=
00:06:24.018 15:16:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # IFS=:
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # read -r var val
00:06:24.018 15:16:41 -- accel/accel.sh@20 -- # val=
00:06:24.018 15:16:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # IFS=:
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # read -r var val
00:06:24.018 15:16:41 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:24.018 15:16:41 -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:06:24.018 15:16:41 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:24.018
00:06:24.018 real	0m1.297s
00:06:24.018 user	0m1.194s
00:06:24.018 sys	0m0.113s
00:06:24.018 15:16:41 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:24.018 15:16:41 -- common/autotest_common.sh@10 -- # set +x
00:06:24.018 ************************************
00:06:24.018 END TEST accel_dualcast
00:06:24.018 ************************************
00:06:24.018 15:16:41 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:06:24.018 15:16:41 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:06:24.018 15:16:41 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:24.018 15:16:41 -- common/autotest_common.sh@10 -- # set +x
00:06:24.018 ************************************
00:06:24.018 START TEST accel_compare
00:06:24.018 ************************************
00:06:24.018 15:16:41 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y
00:06:24.018 15:16:41 -- accel/accel.sh@16 -- # local accel_opc
00:06:24.018 15:16:41 -- accel/accel.sh@17 -- # local accel_module
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # IFS=:
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # read -r var val
00:06:24.018 15:16:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:06:24.018 15:16:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:06:24.018 15:16:41 -- accel/accel.sh@12 -- # build_accel_config
00:06:24.018 15:16:41 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:24.018 15:16:41 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:24.018 15:16:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:24.018 15:16:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:24.018 15:16:41 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:24.018 15:16:41 -- accel/accel.sh@40 -- # local IFS=,
00:06:24.018 15:16:41 -- accel/accel.sh@41 -- # jq -r .
00:06:24.018 [2024-04-26 15:16:41.259873] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:06:24.018 [2024-04-26 15:16:41.259948] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1436372 ]
00:06:24.018 EAL: No free 2048 kB hugepages reported on node 1
00:06:24.018 [2024-04-26 15:16:41.324778] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:24.018 [2024-04-26 15:16:41.396192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:24.018 15:16:41 -- accel/accel.sh@20 -- # val=
00:06:24.018 15:16:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # IFS=:
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # read -r var val
00:06:24.018 15:16:41 -- accel/accel.sh@20 -- # val=
00:06:24.018 15:16:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # IFS=:
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # read -r var val
00:06:24.018 15:16:41 -- accel/accel.sh@20 -- # val=0x1
00:06:24.018 15:16:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # IFS=:
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # read -r var val
00:06:24.018 15:16:41 -- accel/accel.sh@20 -- # val=
00:06:24.018 15:16:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # IFS=:
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # read -r var val
00:06:24.018 15:16:41 -- accel/accel.sh@20 -- # val=
00:06:24.018 15:16:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # IFS=:
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # read -r var val
00:06:24.018 15:16:41 -- accel/accel.sh@20 -- # val=compare
00:06:24.018 15:16:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:24.018 15:16:41 -- accel/accel.sh@23 -- # accel_opc=compare
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # IFS=:
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # read -r var val
00:06:24.018 15:16:41 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:24.018 15:16:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # IFS=:
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # read -r var val
00:06:24.018 15:16:41 -- accel/accel.sh@20 -- # val=
00:06:24.018 15:16:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # IFS=:
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # read -r var val
00:06:24.018 15:16:41 -- accel/accel.sh@20 -- # val=software
00:06:24.018 15:16:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:24.018 15:16:41 -- accel/accel.sh@22 -- # accel_module=software
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # IFS=:
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # read -r var val
00:06:24.018 15:16:41 -- accel/accel.sh@20 -- # val=32
00:06:24.018 15:16:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # IFS=:
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # read -r var val
00:06:24.018 15:16:41 -- accel/accel.sh@20 -- # val=32
00:06:24.018 15:16:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # IFS=:
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # read -r var val
00:06:24.018 15:16:41 -- accel/accel.sh@20 -- # val=1
00:06:24.018 15:16:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # IFS=:
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # read -r var val
00:06:24.018 15:16:41 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:24.018 15:16:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # IFS=:
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # read -r var val
00:06:24.018 15:16:41 -- accel/accel.sh@20 -- # val=Yes
00:06:24.018 15:16:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # IFS=:
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # read -r var val
00:06:24.018 15:16:41 -- accel/accel.sh@20 -- # val=
00:06:24.018 15:16:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # IFS=:
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # read -r var val
00:06:24.018 15:16:41 -- accel/accel.sh@20 -- # val=
00:06:24.018 15:16:41 -- accel/accel.sh@21 -- # case "$var" in
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # IFS=:
00:06:24.018 15:16:41 -- accel/accel.sh@19 -- # read -r var val
00:06:25.405 15:16:42 -- accel/accel.sh@20 -- # val=
00:06:25.405 15:16:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:25.405 15:16:42 -- accel/accel.sh@19 -- # IFS=:
00:06:25.405 15:16:42 -- accel/accel.sh@19 -- # read -r var val
00:06:25.405 15:16:42 -- accel/accel.sh@20 -- # val=
00:06:25.405 15:16:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:25.405 15:16:42 -- accel/accel.sh@19 -- # IFS=:
00:06:25.405 15:16:42 -- accel/accel.sh@19 -- # read -r var val
00:06:25.405 15:16:42 -- accel/accel.sh@20 -- # val=
00:06:25.405 15:16:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:25.405 15:16:42 -- accel/accel.sh@19 -- # IFS=:
00:06:25.405 15:16:42 -- accel/accel.sh@19 -- # read -r var val
00:06:25.405 15:16:42 -- accel/accel.sh@20 -- # val=
00:06:25.405 15:16:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:25.405 15:16:42 -- accel/accel.sh@19 -- # IFS=:
00:06:25.405 15:16:42 -- accel/accel.sh@19 -- # read -r var val
00:06:25.405 15:16:42 -- accel/accel.sh@20 -- # val=
00:06:25.405 15:16:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:25.405 15:16:42 -- accel/accel.sh@19 -- # IFS=:
00:06:25.405 15:16:42 -- accel/accel.sh@19 -- # read -r var val
00:06:25.405 15:16:42 -- accel/accel.sh@20 -- # val=
00:06:25.405 15:16:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:25.405 15:16:42 -- accel/accel.sh@19 -- # IFS=:
00:06:25.405 15:16:42 -- accel/accel.sh@19 -- # read -r var val
00:06:25.405 15:16:42 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:25.405 15:16:42 -- accel/accel.sh@27 -- # [[ -n compare ]]
00:06:25.405 15:16:42 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:25.405
00:06:25.405 real	0m1.295s
00:06:25.405 user	0m1.201s
00:06:25.405 sys	0m0.104s
00:06:25.405 15:16:42 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:25.405 15:16:42 -- common/autotest_common.sh@10 -- # set +x
00:06:25.405 ************************************
00:06:25.405 END TEST accel_compare
00:06:25.405 ************************************
00:06:25.405 15:16:42 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:06:25.405 15:16:42 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:06:25.405 15:16:42 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:25.405 15:16:42 -- common/autotest_common.sh@10 -- # set +x
00:06:25.405 ************************************
00:06:25.406 START TEST accel_xor
00:06:25.406 ************************************
00:06:25.406 15:16:42 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y
00:06:25.406 15:16:42 -- accel/accel.sh@16 -- # local accel_opc
00:06:25.406 15:16:42 -- accel/accel.sh@17 -- # local accel_module
00:06:25.406 15:16:42 -- accel/accel.sh@19 -- # IFS=:
00:06:25.406 15:16:42 -- accel/accel.sh@19 -- # read -r var val
00:06:25.406 15:16:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:06:25.406 15:16:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:06:25.406 15:16:42 -- accel/accel.sh@12 -- # build_accel_config
00:06:25.406 15:16:42 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:25.406 15:16:42 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:25.406 15:16:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:25.406 15:16:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:25.406 15:16:42 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:25.406 15:16:42 -- accel/accel.sh@40 -- # local IFS=,
00:06:25.406 15:16:42 -- accel/accel.sh@41 -- # jq -r .
00:06:25.406 [2024-04-26 15:16:42.751930] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:06:25.406 [2024-04-26 15:16:42.751999] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1436727 ]
00:06:25.406 EAL: No free 2048 kB hugepages reported on node 1
00:06:25.406 [2024-04-26 15:16:42.817093] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:25.667 [2024-04-26 15:16:42.888352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:25.667 15:16:42 -- accel/accel.sh@20 -- # val=
00:06:25.667 15:16:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # IFS=:
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # read -r var val
00:06:25.667 15:16:42 -- accel/accel.sh@20 -- # val=
00:06:25.667 15:16:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # IFS=:
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # read -r var val
00:06:25.667 15:16:42 -- accel/accel.sh@20 -- # val=0x1
00:06:25.667 15:16:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # IFS=:
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # read -r var val
00:06:25.667 15:16:42 -- accel/accel.sh@20 -- # val=
00:06:25.667 15:16:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # IFS=:
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # read -r var val
00:06:25.667 15:16:42 -- accel/accel.sh@20 -- # val=
00:06:25.667 15:16:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # IFS=:
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # read -r var val
00:06:25.667 15:16:42 -- accel/accel.sh@20 -- # val=xor
00:06:25.667 15:16:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:25.667 15:16:42 -- accel/accel.sh@23 -- # accel_opc=xor
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # IFS=:
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # read -r var val
00:06:25.667 15:16:42 -- accel/accel.sh@20 -- # val=2
00:06:25.667 15:16:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # IFS=:
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # read -r var val
00:06:25.667 15:16:42 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:25.667 15:16:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # IFS=:
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # read -r var val
00:06:25.667 15:16:42 -- accel/accel.sh@20 -- # val=
00:06:25.667 15:16:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # IFS=:
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # read -r var val
00:06:25.667 15:16:42 -- accel/accel.sh@20 -- # val=software
00:06:25.667 15:16:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:25.667 15:16:42 -- accel/accel.sh@22 -- # accel_module=software
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # IFS=:
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # read -r var val
00:06:25.667 15:16:42 -- accel/accel.sh@20 -- # val=32
00:06:25.667 15:16:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # IFS=:
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # read -r var val
00:06:25.667 15:16:42 -- accel/accel.sh@20 -- # val=32
00:06:25.667 15:16:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # IFS=:
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # read -r var val
00:06:25.667 15:16:42 -- accel/accel.sh@20 -- # val=1
00:06:25.667 15:16:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # IFS=:
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # read -r var val
00:06:25.667 15:16:42 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:25.667 15:16:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # IFS=:
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # read -r var val
00:06:25.667 15:16:42 -- accel/accel.sh@20 -- # val=Yes
00:06:25.667 15:16:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # IFS=:
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # read -r var val
00:06:25.667 15:16:42 -- accel/accel.sh@20 -- # val=
00:06:25.667 15:16:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # IFS=:
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # read -r var val
00:06:25.667 15:16:42 -- accel/accel.sh@20 -- # val=
00:06:25.667 15:16:42 -- accel/accel.sh@21 -- # case "$var" in
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # IFS=:
00:06:25.667 15:16:42 -- accel/accel.sh@19 -- # read -r var val
00:06:26.611 15:16:44 -- accel/accel.sh@20 -- # val=
00:06:26.611 15:16:44 -- accel/accel.sh@21 -- # case "$var" in
00:06:26.611 15:16:44 -- accel/accel.sh@19 -- # IFS=:
00:06:26.611 15:16:44 -- accel/accel.sh@19 -- # read -r var val
00:06:26.611 15:16:44 -- accel/accel.sh@20 -- # val=
00:06:26.611 15:16:44 -- accel/accel.sh@21 -- # case "$var" in
00:06:26.611 15:16:44 -- accel/accel.sh@19 -- # IFS=:
00:06:26.612 15:16:44 -- accel/accel.sh@19 -- # read -r var val
00:06:26.612 15:16:44 -- accel/accel.sh@20 -- # val=
00:06:26.612 15:16:44 -- accel/accel.sh@21 -- # case "$var" in
00:06:26.612 15:16:44 -- accel/accel.sh@19 -- # IFS=:
00:06:26.612 15:16:44 -- accel/accel.sh@19 -- # read -r var val
00:06:26.612 15:16:44 -- accel/accel.sh@20 -- # val=
00:06:26.612 15:16:44 -- accel/accel.sh@21 -- # case "$var" in
00:06:26.612 15:16:44 -- accel/accel.sh@19 -- # IFS=:
00:06:26.612 15:16:44 -- accel/accel.sh@19 -- # read -r var val
00:06:26.612 15:16:44 -- accel/accel.sh@20 -- # val=
00:06:26.612 15:16:44 -- accel/accel.sh@21 -- # case "$var" in
00:06:26.612 15:16:44 -- accel/accel.sh@19 -- # IFS=:
00:06:26.612 15:16:44 -- accel/accel.sh@19 -- # read -r var val
00:06:26.612 15:16:44 -- accel/accel.sh@20 -- # val=
00:06:26.612 15:16:44 -- accel/accel.sh@21 -- # case "$var" in
00:06:26.612 15:16:44 -- accel/accel.sh@19 -- # IFS=:
00:06:26.612 15:16:44 -- accel/accel.sh@19 -- # read -r var val
00:06:26.612 15:16:44 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:26.612 15:16:44 -- accel/accel.sh@27 -- # [[ -n xor ]]
00:06:26.612 15:16:44 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:26.612
00:06:26.612 real	0m1.295s
00:06:26.612 user	0m1.204s
00:06:26.612 sys	0m0.102s
00:06:26.612 15:16:44 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:26.612 15:16:44 -- common/autotest_common.sh@10 -- # set +x
00:06:26.612 ************************************
00:06:26.612 END TEST accel_xor
00:06:26.612 ************************************
00:06:26.612 15:16:44 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:06:26.612 15:16:44 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:06:26.612 15:16:44 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:26.612 15:16:44 -- common/autotest_common.sh@10 -- # set +x
00:06:26.874 ************************************
START TEST accel_xor 00:06:26.874 ************************************ 00:06:26.874 15:16:44 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:06:26.874 15:16:44 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.874 15:16:44 -- accel/accel.sh@17 -- # local accel_module 00:06:26.874 15:16:44 -- accel/accel.sh@19 -- # IFS=: 00:06:26.874 15:16:44 -- accel/accel.sh@19 -- # read -r var val 00:06:26.874 15:16:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:26.874 15:16:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:26.874 15:16:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.874 15:16:44 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.874 15:16:44 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.874 15:16:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.874 15:16:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.874 15:16:44 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.874 15:16:44 -- accel/accel.sh@40 -- # local IFS=, 00:06:26.874 15:16:44 -- accel/accel.sh@41 -- # jq -r . 00:06:26.874 [2024-04-26 15:16:44.245233] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:06:26.874 [2024-04-26 15:16:44.245308] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1436977 ] 00:06:26.874 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.874 [2024-04-26 15:16:44.310955] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.136 [2024-04-26 15:16:44.383132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.136 15:16:44 -- accel/accel.sh@20 -- # val= 00:06:27.136 15:16:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # IFS=: 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # read -r var val 00:06:27.136 15:16:44 -- accel/accel.sh@20 -- # val= 00:06:27.136 15:16:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # IFS=: 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # read -r var val 00:06:27.136 15:16:44 -- accel/accel.sh@20 -- # val=0x1 00:06:27.136 15:16:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # IFS=: 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # read -r var val 00:06:27.136 15:16:44 -- accel/accel.sh@20 -- # val= 00:06:27.136 15:16:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # IFS=: 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # read -r var val 00:06:27.136 15:16:44 -- accel/accel.sh@20 -- # val= 00:06:27.136 15:16:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # IFS=: 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # read -r var val 00:06:27.136 15:16:44 -- accel/accel.sh@20 -- # val=xor 00:06:27.136 15:16:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.136 15:16:44 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # IFS=: 00:06:27.136 15:16:44 -- 
accel/accel.sh@19 -- # read -r var val 00:06:27.136 15:16:44 -- accel/accel.sh@20 -- # val=3 00:06:27.136 15:16:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # IFS=: 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # read -r var val 00:06:27.136 15:16:44 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.136 15:16:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # IFS=: 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # read -r var val 00:06:27.136 15:16:44 -- accel/accel.sh@20 -- # val= 00:06:27.136 15:16:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # IFS=: 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # read -r var val 00:06:27.136 15:16:44 -- accel/accel.sh@20 -- # val=software 00:06:27.136 15:16:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.136 15:16:44 -- accel/accel.sh@22 -- # accel_module=software 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # IFS=: 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # read -r var val 00:06:27.136 15:16:44 -- accel/accel.sh@20 -- # val=32 00:06:27.136 15:16:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # IFS=: 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # read -r var val 00:06:27.136 15:16:44 -- accel/accel.sh@20 -- # val=32 00:06:27.136 15:16:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # IFS=: 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # read -r var val 00:06:27.136 15:16:44 -- accel/accel.sh@20 -- # val=1 00:06:27.136 15:16:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # IFS=: 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # read -r var val 00:06:27.136 15:16:44 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.136 15:16:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # IFS=: 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- 
# read -r var val 00:06:27.136 15:16:44 -- accel/accel.sh@20 -- # val=Yes 00:06:27.136 15:16:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # IFS=: 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # read -r var val 00:06:27.136 15:16:44 -- accel/accel.sh@20 -- # val= 00:06:27.136 15:16:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # IFS=: 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # read -r var val 00:06:27.136 15:16:44 -- accel/accel.sh@20 -- # val= 00:06:27.136 15:16:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # IFS=: 00:06:27.136 15:16:44 -- accel/accel.sh@19 -- # read -r var val 00:06:28.078 15:16:45 -- accel/accel.sh@20 -- # val= 00:06:28.078 15:16:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.078 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.078 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.078 15:16:45 -- accel/accel.sh@20 -- # val= 00:06:28.078 15:16:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.078 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.078 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.078 15:16:45 -- accel/accel.sh@20 -- # val= 00:06:28.078 15:16:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.078 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.078 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.078 15:16:45 -- accel/accel.sh@20 -- # val= 00:06:28.078 15:16:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.078 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.078 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.078 15:16:45 -- accel/accel.sh@20 -- # val= 00:06:28.078 15:16:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.078 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.078 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.078 15:16:45 -- accel/accel.sh@20 -- # val= 00:06:28.078 15:16:45 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:28.078 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.078 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.078 15:16:45 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.078 15:16:45 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:28.078 15:16:45 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.078 00:06:28.078 real 0m1.298s 00:06:28.078 user 0m1.197s 00:06:28.078 sys 0m0.112s 00:06:28.078 15:16:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:28.078 15:16:45 -- common/autotest_common.sh@10 -- # set +x 00:06:28.078 ************************************ 00:06:28.078 END TEST accel_xor 00:06:28.078 ************************************ 00:06:28.339 15:16:45 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:28.339 15:16:45 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:28.339 15:16:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.339 15:16:45 -- common/autotest_common.sh@10 -- # set +x 00:06:28.339 ************************************ 00:06:28.339 START TEST accel_dif_verify 00:06:28.339 ************************************ 00:06:28.339 15:16:45 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:06:28.339 15:16:45 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.339 15:16:45 -- accel/accel.sh@17 -- # local accel_module 00:06:28.339 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.339 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.339 15:16:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:28.339 15:16:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:28.339 15:16:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.339 15:16:45 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.339 15:16:45 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.339 15:16:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 
]] 00:06:28.339 15:16:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.339 15:16:45 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.339 15:16:45 -- accel/accel.sh@40 -- # local IFS=, 00:06:28.339 15:16:45 -- accel/accel.sh@41 -- # jq -r . 00:06:28.339 [2024-04-26 15:16:45.716662] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:06:28.339 [2024-04-26 15:16:45.716748] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1437238 ] 00:06:28.339 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.339 [2024-04-26 15:16:45.784103] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.600 [2024-04-26 15:16:45.856740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.600 15:16:45 -- accel/accel.sh@20 -- # val= 00:06:28.600 15:16:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.600 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.600 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.600 15:16:45 -- accel/accel.sh@20 -- # val= 00:06:28.600 15:16:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.600 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.600 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.600 15:16:45 -- accel/accel.sh@20 -- # val=0x1 00:06:28.600 15:16:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.600 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.600 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.601 15:16:45 -- accel/accel.sh@20 -- # val= 00:06:28.601 15:16:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.601 15:16:45 -- accel/accel.sh@20 -- # val= 00:06:28.601 15:16:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.601 15:16:45 -- 
accel/accel.sh@19 -- # IFS=: 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.601 15:16:45 -- accel/accel.sh@20 -- # val=dif_verify 00:06:28.601 15:16:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.601 15:16:45 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.601 15:16:45 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.601 15:16:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.601 15:16:45 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.601 15:16:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.601 15:16:45 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:28.601 15:16:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.601 15:16:45 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:28.601 15:16:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.601 15:16:45 -- accel/accel.sh@20 -- # val= 00:06:28.601 15:16:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.601 15:16:45 -- accel/accel.sh@20 -- # val=software 00:06:28.601 15:16:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.601 15:16:45 -- accel/accel.sh@22 -- # accel_module=software 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.601 15:16:45 -- accel/accel.sh@20 -- # val=32 00:06:28.601 
15:16:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.601 15:16:45 -- accel/accel.sh@20 -- # val=32 00:06:28.601 15:16:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.601 15:16:45 -- accel/accel.sh@20 -- # val=1 00:06:28.601 15:16:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.601 15:16:45 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.601 15:16:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.601 15:16:45 -- accel/accel.sh@20 -- # val=No 00:06:28.601 15:16:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.601 15:16:45 -- accel/accel.sh@20 -- # val= 00:06:28.601 15:16:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:28.601 15:16:45 -- accel/accel.sh@20 -- # val= 00:06:28.601 15:16:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # IFS=: 00:06:28.601 15:16:45 -- accel/accel.sh@19 -- # read -r var val 00:06:29.545 15:16:46 -- accel/accel.sh@20 -- # val= 00:06:29.545 15:16:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.545 15:16:46 -- accel/accel.sh@19 -- # IFS=: 00:06:29.545 15:16:46 -- accel/accel.sh@19 -- # read -r var val 00:06:29.545 15:16:46 -- accel/accel.sh@20 -- # val= 00:06:29.545 15:16:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.545 15:16:46 -- accel/accel.sh@19 -- # IFS=: 
00:06:29.545 15:16:46 -- accel/accel.sh@19 -- # read -r var val 00:06:29.545 15:16:46 -- accel/accel.sh@20 -- # val= 00:06:29.545 15:16:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.545 15:16:46 -- accel/accel.sh@19 -- # IFS=: 00:06:29.545 15:16:46 -- accel/accel.sh@19 -- # read -r var val 00:06:29.545 15:16:46 -- accel/accel.sh@20 -- # val= 00:06:29.545 15:16:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.545 15:16:46 -- accel/accel.sh@19 -- # IFS=: 00:06:29.545 15:16:46 -- accel/accel.sh@19 -- # read -r var val 00:06:29.545 15:16:46 -- accel/accel.sh@20 -- # val= 00:06:29.545 15:16:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.545 15:16:46 -- accel/accel.sh@19 -- # IFS=: 00:06:29.545 15:16:46 -- accel/accel.sh@19 -- # read -r var val 00:06:29.545 15:16:46 -- accel/accel.sh@20 -- # val= 00:06:29.545 15:16:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.545 15:16:46 -- accel/accel.sh@19 -- # IFS=: 00:06:29.545 15:16:46 -- accel/accel.sh@19 -- # read -r var val 00:06:29.545 15:16:46 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.545 15:16:46 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:29.545 15:16:46 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.545 00:06:29.545 real 0m1.298s 00:06:29.545 user 0m1.194s 00:06:29.545 sys 0m0.116s 00:06:29.545 15:16:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:29.545 15:16:46 -- common/autotest_common.sh@10 -- # set +x 00:06:29.545 ************************************ 00:06:29.545 END TEST accel_dif_verify 00:06:29.545 ************************************ 00:06:29.806 15:16:47 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:29.806 15:16:47 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:29.806 15:16:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.806 15:16:47 -- common/autotest_common.sh@10 -- # set +x 00:06:29.806 ************************************ 00:06:29.806 START TEST 
accel_dif_generate 00:06:29.806 ************************************ 00:06:29.806 15:16:47 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:06:29.806 15:16:47 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.806 15:16:47 -- accel/accel.sh@17 -- # local accel_module 00:06:29.806 15:16:47 -- accel/accel.sh@19 -- # IFS=: 00:06:29.806 15:16:47 -- accel/accel.sh@19 -- # read -r var val 00:06:29.806 15:16:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:29.806 15:16:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:29.806 15:16:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.806 15:16:47 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.806 15:16:47 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.806 15:16:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.806 15:16:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.806 15:16:47 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.806 15:16:47 -- accel/accel.sh@40 -- # local IFS=, 00:06:29.806 15:16:47 -- accel/accel.sh@41 -- # jq -r . 00:06:29.806 [2024-04-26 15:16:47.199758] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:06:29.807 [2024-04-26 15:16:47.199967] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1437508 ] 00:06:29.807 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.068 [2024-04-26 15:16:47.263599] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.068 [2024-04-26 15:16:47.327663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.068 15:16:47 -- accel/accel.sh@20 -- # val= 00:06:30.068 15:16:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.068 15:16:47 -- accel/accel.sh@19 -- # IFS=: 00:06:30.068 15:16:47 -- accel/accel.sh@19 -- # read -r var val 00:06:30.068 15:16:47 -- accel/accel.sh@20 -- # val= 00:06:30.068 15:16:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.068 15:16:47 -- accel/accel.sh@19 -- # IFS=: 00:06:30.068 15:16:47 -- accel/accel.sh@19 -- # read -r var val 00:06:30.068 15:16:47 -- accel/accel.sh@20 -- # val=0x1 00:06:30.068 15:16:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.068 15:16:47 -- accel/accel.sh@19 -- # IFS=: 00:06:30.068 15:16:47 -- accel/accel.sh@19 -- # read -r var val 00:06:30.068 15:16:47 -- accel/accel.sh@20 -- # val= 00:06:30.068 15:16:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.068 15:16:47 -- accel/accel.sh@19 -- # IFS=: 00:06:30.068 15:16:47 -- accel/accel.sh@19 -- # read -r var val 00:06:30.068 15:16:47 -- accel/accel.sh@20 -- # val= 00:06:30.068 15:16:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.068 15:16:47 -- accel/accel.sh@19 -- # IFS=: 00:06:30.068 15:16:47 -- accel/accel.sh@19 -- # read -r var val 00:06:30.068 15:16:47 -- accel/accel.sh@20 -- # val=dif_generate 00:06:30.068 15:16:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.068 15:16:47 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:30.068 15:16:47 -- accel/accel.sh@19 -- # IFS=: 00:06:30.068 15:16:47 
-- accel/accel.sh@19 -- # read -r var val 00:06:30.068 15:16:47 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.068 15:16:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.068 15:16:47 -- accel/accel.sh@19 -- # IFS=: 00:06:30.068 15:16:47 -- accel/accel.sh@19 -- # read -r var val 00:06:30.068 15:16:47 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.068 15:16:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.068 15:16:47 -- accel/accel.sh@19 -- # IFS=: 00:06:30.068 15:16:47 -- accel/accel.sh@19 -- # read -r var val 00:06:30.068 15:16:47 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:30.068 15:16:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.068 15:16:47 -- accel/accel.sh@19 -- # IFS=: 00:06:30.068 15:16:47 -- accel/accel.sh@19 -- # read -r var val 00:06:30.068 15:16:47 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:30.068 15:16:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.069 15:16:47 -- accel/accel.sh@19 -- # IFS=: 00:06:30.069 15:16:47 -- accel/accel.sh@19 -- # read -r var val 00:06:30.069 15:16:47 -- accel/accel.sh@20 -- # val= 00:06:30.069 15:16:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.069 15:16:47 -- accel/accel.sh@19 -- # IFS=: 00:06:30.069 15:16:47 -- accel/accel.sh@19 -- # read -r var val 00:06:30.069 15:16:47 -- accel/accel.sh@20 -- # val=software 00:06:30.069 15:16:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.069 15:16:47 -- accel/accel.sh@22 -- # accel_module=software 00:06:30.069 15:16:47 -- accel/accel.sh@19 -- # IFS=: 00:06:30.069 15:16:47 -- accel/accel.sh@19 -- # read -r var val 00:06:30.069 15:16:47 -- accel/accel.sh@20 -- # val=32 00:06:30.069 15:16:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.069 15:16:47 -- accel/accel.sh@19 -- # IFS=: 00:06:30.069 15:16:47 -- accel/accel.sh@19 -- # read -r var val 00:06:30.069 15:16:47 -- accel/accel.sh@20 -- # val=32 00:06:30.069 15:16:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.069 15:16:47 -- accel/accel.sh@19 -- # IFS=: 00:06:30.069 15:16:47 
-- accel/accel.sh@19 -- # read -r var val 00:06:30.069 15:16:47 -- accel/accel.sh@20 -- # val=1 00:06:30.069 15:16:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.069 15:16:47 -- accel/accel.sh@19 -- # IFS=: 00:06:30.069 15:16:47 -- accel/accel.sh@19 -- # read -r var val 00:06:30.069 15:16:47 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.069 15:16:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.069 15:16:47 -- accel/accel.sh@19 -- # IFS=: 00:06:30.069 15:16:47 -- accel/accel.sh@19 -- # read -r var val 00:06:30.069 15:16:47 -- accel/accel.sh@20 -- # val=No 00:06:30.069 15:16:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.069 15:16:47 -- accel/accel.sh@19 -- # IFS=: 00:06:30.069 15:16:47 -- accel/accel.sh@19 -- # read -r var val 00:06:30.069 15:16:47 -- accel/accel.sh@20 -- # val= 00:06:30.069 15:16:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.069 15:16:47 -- accel/accel.sh@19 -- # IFS=: 00:06:30.069 15:16:47 -- accel/accel.sh@19 -- # read -r var val 00:06:30.069 15:16:47 -- accel/accel.sh@20 -- # val= 00:06:30.069 15:16:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.069 15:16:47 -- accel/accel.sh@19 -- # IFS=: 00:06:30.069 15:16:47 -- accel/accel.sh@19 -- # read -r var val 00:06:31.013 15:16:48 -- accel/accel.sh@20 -- # val= 00:06:31.013 15:16:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.013 15:16:48 -- accel/accel.sh@19 -- # IFS=: 00:06:31.013 15:16:48 -- accel/accel.sh@19 -- # read -r var val 00:06:31.013 15:16:48 -- accel/accel.sh@20 -- # val= 00:06:31.013 15:16:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.013 15:16:48 -- accel/accel.sh@19 -- # IFS=: 00:06:31.013 15:16:48 -- accel/accel.sh@19 -- # read -r var val 00:06:31.013 15:16:48 -- accel/accel.sh@20 -- # val= 00:06:31.013 15:16:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.013 15:16:48 -- accel/accel.sh@19 -- # IFS=: 00:06:31.013 15:16:48 -- accel/accel.sh@19 -- # read -r var val 00:06:31.013 15:16:48 -- accel/accel.sh@20 -- # val= 00:06:31.013 
15:16:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.013 15:16:48 -- accel/accel.sh@19 -- # IFS=: 00:06:31.013 15:16:48 -- accel/accel.sh@19 -- # read -r var val 00:06:31.013 15:16:48 -- accel/accel.sh@20 -- # val= 00:06:31.013 15:16:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.013 15:16:48 -- accel/accel.sh@19 -- # IFS=: 00:06:31.013 15:16:48 -- accel/accel.sh@19 -- # read -r var val 00:06:31.013 15:16:48 -- accel/accel.sh@20 -- # val= 00:06:31.013 15:16:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.013 15:16:48 -- accel/accel.sh@19 -- # IFS=: 00:06:31.013 15:16:48 -- accel/accel.sh@19 -- # read -r var val 00:06:31.013 15:16:48 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.013 15:16:48 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:31.013 15:16:48 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.013 00:06:31.013 real 0m1.286s 00:06:31.013 user 0m1.193s 00:06:31.013 sys 0m0.104s 00:06:31.013 15:16:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.013 15:16:48 -- common/autotest_common.sh@10 -- # set +x 00:06:31.013 ************************************ 00:06:31.013 END TEST accel_dif_generate 00:06:31.013 ************************************ 00:06:31.275 15:16:48 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:31.275 15:16:48 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:31.275 15:16:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.275 15:16:48 -- common/autotest_common.sh@10 -- # set +x 00:06:31.275 ************************************ 00:06:31.275 START TEST accel_dif_generate_copy 00:06:31.275 ************************************ 00:06:31.275 15:16:48 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:06:31.275 15:16:48 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.275 15:16:48 -- accel/accel.sh@17 -- # local accel_module 00:06:31.275 15:16:48 -- accel/accel.sh@19 -- # IFS=: 
00:06:31.275 15:16:48 -- accel/accel.sh@19 -- # read -r var val
00:06:31.275 15:16:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:06:31.275 15:16:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:06:31.275 15:16:48 -- accel/accel.sh@12 -- # build_accel_config
00:06:31.275 15:16:48 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:31.275 15:16:48 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:31.275 15:16:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:31.275 15:16:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:31.275 15:16:48 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:31.275 15:16:48 -- accel/accel.sh@40 -- # local IFS=,
00:06:31.275 15:16:48 -- accel/accel.sh@41 -- # jq -r .
00:06:31.275 [2024-04-26 15:16:48.664074] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:06:31.275 [2024-04-26 15:16:48.664162] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1437851 ]
00:06:31.275 EAL: No free 2048 kB hugepages reported on node 1
00:06:31.536 [2024-04-26 15:16:48.726042] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:31.536 [2024-04-26 15:16:48.789405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:31.536 15:16:48 -- accel/accel.sh@20 -- # val=
00:06:31.536 15:16:48 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.536 15:16:48 -- accel/accel.sh@19 -- # IFS=:
00:06:31.536 15:16:48 -- accel/accel.sh@19 -- # read -r var val
00:06:31.536 15:16:48 -- accel/accel.sh@20 -- # val=
00:06:31.536 15:16:48 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.536 15:16:48 -- accel/accel.sh@19 -- # IFS=:
00:06:31.536 15:16:48 -- accel/accel.sh@19 -- # read -r var val
00:06:31.536 15:16:48 -- accel/accel.sh@20 -- # val=0x1
00:06:31.536 15:16:48 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.536 15:16:48 -- accel/accel.sh@19 -- # IFS=:
00:06:31.536 15:16:48 -- accel/accel.sh@19 -- # read -r var val
00:06:31.536 15:16:48 -- accel/accel.sh@20 -- # val=
00:06:31.536 15:16:48 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.536 15:16:48 -- accel/accel.sh@19 -- # IFS=:
00:06:31.536 15:16:48 -- accel/accel.sh@19 -- # read -r var val
00:06:31.536 15:16:48 -- accel/accel.sh@20 -- # val=
00:06:31.536 15:16:48 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.536 15:16:48 -- accel/accel.sh@19 -- # IFS=:
00:06:31.536 15:16:48 -- accel/accel.sh@19 -- # read -r var val
00:06:31.536 15:16:48 -- accel/accel.sh@20 -- # val=dif_generate_copy
00:06:31.536 15:16:48 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.536 15:16:48 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy
00:06:31.536 15:16:48 -- accel/accel.sh@19 -- # IFS=:
00:06:31.536 15:16:48 -- accel/accel.sh@19 -- # read -r var val
00:06:31.536 15:16:48 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:31.536 15:16:48 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.536 15:16:48 -- accel/accel.sh@19 -- # IFS=:
00:06:31.536 15:16:48 -- accel/accel.sh@19 -- # read -r var val
00:06:31.536 15:16:48 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:31.536 15:16:48 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.536 15:16:48 -- accel/accel.sh@19 -- # IFS=:
00:06:31.536 15:16:48 -- accel/accel.sh@19 -- # read -r var val
00:06:31.536 15:16:48 -- accel/accel.sh@20 -- # val=
00:06:31.536 15:16:48 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.536 15:16:48 -- accel/accel.sh@19 -- # IFS=:
00:06:31.536 15:16:48 -- accel/accel.sh@19 -- # read -r var val
00:06:31.536 15:16:48 -- accel/accel.sh@20 -- # val=software
00:06:31.537 15:16:48 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.537 15:16:48 -- accel/accel.sh@22 -- # accel_module=software
00:06:31.537 15:16:48 -- accel/accel.sh@19 -- # IFS=:
00:06:31.537 15:16:48 -- accel/accel.sh@19 -- # read -r var val
00:06:31.537 15:16:48 -- accel/accel.sh@20 -- # val=32
00:06:31.537 15:16:48 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.537 15:16:48 -- accel/accel.sh@19 -- # IFS=:
00:06:31.537 15:16:48 -- accel/accel.sh@19 -- # read -r var val
00:06:31.537 15:16:48 -- accel/accel.sh@20 -- # val=32
00:06:31.537 15:16:48 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.537 15:16:48 -- accel/accel.sh@19 -- # IFS=:
00:06:31.537 15:16:48 -- accel/accel.sh@19 -- # read -r var val
00:06:31.537 15:16:48 -- accel/accel.sh@20 -- # val=1
00:06:31.537 15:16:48 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.537 15:16:48 -- accel/accel.sh@19 -- # IFS=:
00:06:31.537 15:16:48 -- accel/accel.sh@19 -- # read -r var val
00:06:31.537 15:16:48 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:31.537 15:16:48 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.537 15:16:48 -- accel/accel.sh@19 -- # IFS=:
00:06:31.537 15:16:48 -- accel/accel.sh@19 -- # read -r var val
00:06:31.537 15:16:48 -- accel/accel.sh@20 -- # val=No
00:06:31.537 15:16:48 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.537 15:16:48 -- accel/accel.sh@19 -- # IFS=:
00:06:31.537 15:16:48 -- accel/accel.sh@19 -- # read -r var val
00:06:31.537 15:16:48 -- accel/accel.sh@20 -- # val=
00:06:31.537 15:16:48 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.537 15:16:48 -- accel/accel.sh@19 -- # IFS=:
00:06:31.537 15:16:48 -- accel/accel.sh@19 -- # read -r var val
00:06:31.537 15:16:48 -- accel/accel.sh@20 -- # val=
00:06:31.537 15:16:48 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.537 15:16:48 -- accel/accel.sh@19 -- # IFS=:
00:06:31.537 15:16:48 -- accel/accel.sh@19 -- # read -r var val
00:06:32.480 15:16:49 -- accel/accel.sh@20 -- # val=
00:06:32.480 15:16:49 -- accel/accel.sh@21 -- # case "$var" in
00:06:32.480 15:16:49 -- accel/accel.sh@19 -- # IFS=:
00:06:32.480 15:16:49 -- accel/accel.sh@19 -- # read -r var val
00:06:32.480 15:16:49 -- accel/accel.sh@20 -- # val=
00:06:32.480 15:16:49 -- accel/accel.sh@21 -- # case "$var" in
00:06:32.480 15:16:49 -- accel/accel.sh@19 -- # IFS=:
00:06:32.480 15:16:49 -- accel/accel.sh@19 -- # read -r var val
00:06:32.480 15:16:49 -- accel/accel.sh@20 -- # val=
00:06:32.480 15:16:49 -- accel/accel.sh@21 -- # case "$var" in
00:06:32.480 15:16:49 -- accel/accel.sh@19 -- # IFS=:
00:06:32.480 15:16:49 -- accel/accel.sh@19 -- # read -r var val
00:06:32.480 15:16:49 -- accel/accel.sh@20 -- # val=
00:06:32.480 15:16:49 -- accel/accel.sh@21 -- # case "$var" in
00:06:32.480 15:16:49 -- accel/accel.sh@19 -- # IFS=:
00:06:32.480 15:16:49 -- accel/accel.sh@19 -- # read -r var val
00:06:32.480 15:16:49 -- accel/accel.sh@20 -- # val=
00:06:32.480 15:16:49 -- accel/accel.sh@21 -- # case "$var" in
00:06:32.480 15:16:49 -- accel/accel.sh@19 -- # IFS=:
00:06:32.480 15:16:49 -- accel/accel.sh@19 -- # read -r var val
00:06:32.480 15:16:49 -- accel/accel.sh@20 -- # val=
00:06:32.480 15:16:49 -- accel/accel.sh@21 -- # case "$var" in
00:06:32.480 15:16:49 -- accel/accel.sh@19 -- # IFS=:
00:06:32.480 15:16:49 -- accel/accel.sh@19 -- # read -r var val
00:06:32.480 15:16:49 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:32.480 15:16:49 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:06:32.480 15:16:49 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:32.480
00:06:32.480 real 0m1.283s
00:06:32.480 user 0m1.197s
00:06:32.480 sys 0m0.097s
00:06:32.480 15:16:49 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:32.480 15:16:49 -- common/autotest_common.sh@10 -- # set +x
00:06:32.480 ************************************
00:06:32.480 END TEST accel_dif_generate_copy
00:06:32.480 ************************************
00:06:32.741 15:16:49 -- accel/accel.sh@115 -- # [[ y == y ]]
00:06:32.741 15:16:49 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:32.741 15:16:49 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:06:32.741 15:16:49 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:32.741 15:16:49 -- common/autotest_common.sh@10 -- # set +x
00:06:32.741 ************************************
00:06:32.741 START TEST accel_comp
00:06:32.741 ************************************
00:06:32.741 15:16:50 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:32.741 15:16:50 -- accel/accel.sh@16 -- # local accel_opc
00:06:32.741 15:16:50 -- accel/accel.sh@17 -- # local accel_module
00:06:32.741 15:16:50 -- accel/accel.sh@19 -- # IFS=:
00:06:32.741 15:16:50 -- accel/accel.sh@19 -- # read -r var val
00:06:32.741 15:16:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:32.741 15:16:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:32.741 15:16:50 -- accel/accel.sh@12 -- # build_accel_config
00:06:32.741 15:16:50 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:32.741 15:16:50 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:32.741 15:16:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:32.741 15:16:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:32.741 15:16:50 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:32.741 15:16:50 -- accel/accel.sh@40 -- # local IFS=,
00:06:32.741 15:16:50 -- accel/accel.sh@41 -- # jq -r .
00:06:32.741 [2024-04-26 15:16:50.129606] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:06:32.741 [2024-04-26 15:16:50.129665] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1438210 ]
00:06:32.741 EAL: No free 2048 kB hugepages reported on node 1
00:06:33.002 [2024-04-26 15:16:50.190534] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:33.002 [2024-04-26 15:16:50.253573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:33.002 15:16:50 -- accel/accel.sh@20 -- # val=
00:06:33.002 15:16:50 -- accel/accel.sh@21 -- # case "$var" in
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # IFS=:
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # read -r var val
00:06:33.002 15:16:50 -- accel/accel.sh@20 -- # val=
00:06:33.002 15:16:50 -- accel/accel.sh@21 -- # case "$var" in
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # IFS=:
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # read -r var val
00:06:33.002 15:16:50 -- accel/accel.sh@20 -- # val=
00:06:33.002 15:16:50 -- accel/accel.sh@21 -- # case "$var" in
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # IFS=:
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # read -r var val
00:06:33.002 15:16:50 -- accel/accel.sh@20 -- # val=0x1
00:06:33.002 15:16:50 -- accel/accel.sh@21 -- # case "$var" in
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # IFS=:
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # read -r var val
00:06:33.002 15:16:50 -- accel/accel.sh@20 -- # val=
00:06:33.002 15:16:50 -- accel/accel.sh@21 -- # case "$var" in
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # IFS=:
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # read -r var val
00:06:33.002 15:16:50 -- accel/accel.sh@20 -- # val=
00:06:33.002 15:16:50 -- accel/accel.sh@21 -- # case "$var" in
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # IFS=:
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # read -r var val
00:06:33.002 15:16:50 -- accel/accel.sh@20 -- # val=compress
00:06:33.002 15:16:50 -- accel/accel.sh@21 -- # case "$var" in
00:06:33.002 15:16:50 -- accel/accel.sh@23 -- # accel_opc=compress
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # IFS=:
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # read -r var val
00:06:33.002 15:16:50 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:33.002 15:16:50 -- accel/accel.sh@21 -- # case "$var" in
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # IFS=:
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # read -r var val
00:06:33.002 15:16:50 -- accel/accel.sh@20 -- # val=
00:06:33.002 15:16:50 -- accel/accel.sh@21 -- # case "$var" in
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # IFS=:
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # read -r var val
00:06:33.002 15:16:50 -- accel/accel.sh@20 -- # val=software
00:06:33.002 15:16:50 -- accel/accel.sh@21 -- # case "$var" in
00:06:33.002 15:16:50 -- accel/accel.sh@22 -- # accel_module=software
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # IFS=:
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # read -r var val
00:06:33.002 15:16:50 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:33.002 15:16:50 -- accel/accel.sh@21 -- # case "$var" in
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # IFS=:
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # read -r var val
00:06:33.002 15:16:50 -- accel/accel.sh@20 -- # val=32
00:06:33.002 15:16:50 -- accel/accel.sh@21 -- # case "$var" in
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # IFS=:
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # read -r var val
00:06:33.002 15:16:50 -- accel/accel.sh@20 -- # val=32
00:06:33.002 15:16:50 -- accel/accel.sh@21 -- # case "$var" in
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # IFS=:
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # read -r var val
00:06:33.002 15:16:50 -- accel/accel.sh@20 -- # val=1
00:06:33.002 15:16:50 -- accel/accel.sh@21 -- # case "$var" in
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # IFS=:
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # read -r var val
00:06:33.002 15:16:50 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:33.002 15:16:50 -- accel/accel.sh@21 -- # case "$var" in
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # IFS=:
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # read -r var val
00:06:33.002 15:16:50 -- accel/accel.sh@20 -- # val=No
00:06:33.002 15:16:50 -- accel/accel.sh@21 -- # case "$var" in
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # IFS=:
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # read -r var val
00:06:33.002 15:16:50 -- accel/accel.sh@20 -- # val=
00:06:33.002 15:16:50 -- accel/accel.sh@21 -- # case "$var" in
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # IFS=:
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # read -r var val
00:06:33.002 15:16:50 -- accel/accel.sh@20 -- # val=
00:06:33.002 15:16:50 -- accel/accel.sh@21 -- # case "$var" in
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # IFS=:
00:06:33.002 15:16:50 -- accel/accel.sh@19 -- # read -r var val
00:06:33.944 15:16:51 -- accel/accel.sh@20 -- # val=
00:06:33.944 15:16:51 -- accel/accel.sh@21 -- # case "$var" in
00:06:33.944 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:33.944 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:33.944 15:16:51 -- accel/accel.sh@20 -- # val=
00:06:33.944 15:16:51 -- accel/accel.sh@21 -- # case "$var" in
00:06:33.944 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:33.944 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:33.944 15:16:51 -- accel/accel.sh@20 -- # val=
00:06:33.944 15:16:51 -- accel/accel.sh@21 -- # case "$var" in
00:06:33.944 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:33.944 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:33.944 15:16:51 -- accel/accel.sh@20 -- # val=
00:06:33.944 15:16:51 -- accel/accel.sh@21 -- # case "$var" in
00:06:33.944 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:33.944 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:33.944 15:16:51 -- accel/accel.sh@20 -- # val=
00:06:33.944 15:16:51 -- accel/accel.sh@21 -- # case "$var" in
00:06:33.944 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:33.944 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:33.944 15:16:51 -- accel/accel.sh@20 -- # val=
00:06:33.944 15:16:51 -- accel/accel.sh@21 -- # case "$var" in
00:06:33.944 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:33.944 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:33.944 15:16:51 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:33.944 15:16:51 -- accel/accel.sh@27 -- # [[ -n compress ]]
00:06:33.944 15:16:51 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:33.944
00:06:33.944 real 0m1.282s
00:06:33.944 user 0m1.193s
00:06:33.944 sys 0m0.101s
00:06:33.944 15:16:51 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:33.944 15:16:51 -- common/autotest_common.sh@10 -- # set +x
00:06:33.944 ************************************
00:06:33.944 END TEST accel_comp
00:06:33.944 ************************************
00:06:34.205 15:16:51 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:06:34.205 15:16:51 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:06:34.205 15:16:51 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:34.205 15:16:51 -- common/autotest_common.sh@10 -- # set +x
00:06:34.205 ************************************
00:06:34.205 START TEST accel_decomp
00:06:34.205 ************************************
00:06:34.205 15:16:51 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:06:34.205 15:16:51 -- accel/accel.sh@16 -- # local accel_opc
00:06:34.205 15:16:51 -- accel/accel.sh@17 -- # local accel_module
00:06:34.205 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:34.205 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:34.205 15:16:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:06:34.205 15:16:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:06:34.205 15:16:51 -- accel/accel.sh@12 -- # build_accel_config
00:06:34.205 15:16:51 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:34.205 15:16:51 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:34.205 15:16:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:34.205 15:16:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:34.205 15:16:51 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:34.205 15:16:51 -- accel/accel.sh@40 -- # local IFS=,
00:06:34.205 15:16:51 -- accel/accel.sh@41 -- # jq -r .
00:06:34.205 [2024-04-26 15:16:51.592262] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:06:34.205 [2024-04-26 15:16:51.592324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1438566 ]
00:06:34.205 EAL: No free 2048 kB hugepages reported on node 1
00:06:34.465 [2024-04-26 15:16:51.654875] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:34.465 [2024-04-26 15:16:51.721298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.465 15:16:51 -- accel/accel.sh@20 -- # val=
00:06:34.465 15:16:51 -- accel/accel.sh@21 -- # case "$var" in
00:06:34.465 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:34.465 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:34.465 15:16:51 -- accel/accel.sh@20 -- # val=
00:06:34.465 15:16:51 -- accel/accel.sh@21 -- # case "$var" in
00:06:34.465 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:34.465 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:34.465 15:16:51 -- accel/accel.sh@20 -- # val=
00:06:34.465 15:16:51 -- accel/accel.sh@21 -- # case "$var" in
00:06:34.465 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:34.465 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:34.465 15:16:51 -- accel/accel.sh@20 -- # val=0x1
00:06:34.465 15:16:51 -- accel/accel.sh@21 -- # case "$var" in
00:06:34.465 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:34.465 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:34.465 15:16:51 -- accel/accel.sh@20 -- # val=
00:06:34.465 15:16:51 -- accel/accel.sh@21 -- # case "$var" in
00:06:34.465 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:34.465 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:34.465 15:16:51 -- accel/accel.sh@20 -- # val=
00:06:34.465 15:16:51 -- accel/accel.sh@21 -- # case "$var" in
00:06:34.465 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:34.465 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:34.465 15:16:51 -- accel/accel.sh@20 -- # val=decompress
00:06:34.465 15:16:51 -- accel/accel.sh@21 -- # case "$var" in
00:06:34.465 15:16:51 -- accel/accel.sh@23 -- # accel_opc=decompress
00:06:34.465 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:34.465 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:34.465 15:16:51 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:34.465 15:16:51 -- accel/accel.sh@21 -- # case "$var" in
00:06:34.465 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:34.465 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:34.465 15:16:51 -- accel/accel.sh@20 -- # val=
00:06:34.465 15:16:51 -- accel/accel.sh@21 -- # case "$var" in
00:06:34.465 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:34.465 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:34.465 15:16:51 -- accel/accel.sh@20 -- # val=software
00:06:34.465 15:16:51 -- accel/accel.sh@21 -- # case "$var" in
00:06:34.465 15:16:51 -- accel/accel.sh@22 -- # accel_module=software
00:06:34.465 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:34.465 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:34.465 15:16:51 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:34.465 15:16:51 -- accel/accel.sh@21 -- # case "$var" in
00:06:34.465 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:34.465 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:34.466 15:16:51 -- accel/accel.sh@20 -- # val=32
00:06:34.466 15:16:51 -- accel/accel.sh@21 -- # case "$var" in
00:06:34.466 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:34.466 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:34.466 15:16:51 -- accel/accel.sh@20 -- # val=32
00:06:34.466 15:16:51 -- accel/accel.sh@21 -- # case "$var" in
00:06:34.466 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:34.466 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:34.466 15:16:51 -- accel/accel.sh@20 -- # val=1
00:06:34.466 15:16:51 -- accel/accel.sh@21 -- # case "$var" in
00:06:34.466 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:34.466 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:34.466 15:16:51 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:34.466 15:16:51 -- accel/accel.sh@21 -- # case "$var" in
00:06:34.466 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:34.466 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:34.466 15:16:51 -- accel/accel.sh@20 -- # val=Yes
00:06:34.466 15:16:51 -- accel/accel.sh@21 -- # case "$var" in
00:06:34.466 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:34.466 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:34.466 15:16:51 -- accel/accel.sh@20 -- # val=
00:06:34.466 15:16:51 -- accel/accel.sh@21 -- # case "$var" in
00:06:34.466 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:34.466 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:34.466 15:16:51 -- accel/accel.sh@20 -- # val=
00:06:34.466 15:16:51 -- accel/accel.sh@21 -- # case "$var" in
00:06:34.466 15:16:51 -- accel/accel.sh@19 -- # IFS=:
00:06:34.466 15:16:51 -- accel/accel.sh@19 -- # read -r var val
00:06:35.407 15:16:52 -- accel/accel.sh@20 -- # val=
00:06:35.407 15:16:52 -- accel/accel.sh@21 -- # case "$var" in
00:06:35.407 15:16:52 -- accel/accel.sh@19 -- # IFS=:
00:06:35.407 15:16:52 -- accel/accel.sh@19 -- # read -r var val
00:06:35.407 15:16:52 -- accel/accel.sh@20 -- # val=
00:06:35.407 15:16:52 -- accel/accel.sh@21 -- # case "$var" in
00:06:35.407 15:16:52 -- accel/accel.sh@19 -- # IFS=:
00:06:35.407 15:16:52 -- accel/accel.sh@19 -- # read -r var val
00:06:35.407 15:16:52 -- accel/accel.sh@20 -- # val=
00:06:35.407 15:16:52 -- accel/accel.sh@21 -- # case "$var" in
00:06:35.407 15:16:52 -- accel/accel.sh@19 -- # IFS=:
00:06:35.407 15:16:52 -- accel/accel.sh@19 -- # read -r var val
00:06:35.407 15:16:52 -- accel/accel.sh@20 -- # val=
00:06:35.407 15:16:52 -- accel/accel.sh@21 -- # case "$var" in
00:06:35.407 15:16:52 -- accel/accel.sh@19 -- # IFS=:
00:06:35.407 15:16:52 -- accel/accel.sh@19 -- # read -r var val
00:06:35.407 15:16:52 -- accel/accel.sh@20 -- # val=
00:06:35.407 15:16:52 -- accel/accel.sh@21 -- # case "$var" in
00:06:35.407 15:16:52 -- accel/accel.sh@19 -- # IFS=:
00:06:35.407 15:16:52 -- accel/accel.sh@19 -- # read -r var val
00:06:35.407 15:16:52 -- accel/accel.sh@20 -- # val=
00:06:35.407 15:16:52 -- accel/accel.sh@21 -- # case "$var" in
00:06:35.407 15:16:52 -- accel/accel.sh@19 -- # IFS=:
00:06:35.407 15:16:52 -- accel/accel.sh@19 -- # read -r var val
00:06:35.407 15:16:52 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:35.407 15:16:52 -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:35.407 15:16:52 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:35.407
00:06:35.407 real 0m1.289s
00:06:35.407 user 0m1.200s
00:06:35.407 sys 0m0.101s
00:06:35.407 15:16:52 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:35.407 15:16:52 -- common/autotest_common.sh@10 -- # set +x
00:06:35.407 ************************************
00:06:35.407 END TEST accel_decomp
00:06:35.407 ************************************
00:06:35.668 15:16:52 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:06:35.668 15:16:52 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:06:35.668 15:16:52 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:35.668 15:16:52 -- common/autotest_common.sh@10 -- # set +x
00:06:35.668 ************************************
00:06:35.668 START TEST accel_decmop_full
00:06:35.668 ************************************
00:06:35.668 15:16:53 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:06:35.668 15:16:53 -- accel/accel.sh@16 -- # local accel_opc
00:06:35.668 15:16:53 -- accel/accel.sh@17 -- # local accel_module
00:06:35.668 15:16:53 -- accel/accel.sh@19 -- # IFS=:
00:06:35.668 15:16:53 -- accel/accel.sh@19 -- # read -r var val
00:06:35.668 15:16:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:06:35.668 15:16:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:06:35.668 15:16:53 -- accel/accel.sh@12 -- # build_accel_config
00:06:35.668 15:16:53 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:35.668 15:16:53 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:35.668 15:16:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:35.668 15:16:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:35.668 15:16:53 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:35.668 15:16:53 -- accel/accel.sh@40 -- # local IFS=,
00:06:35.668 15:16:53 -- accel/accel.sh@41 -- # jq -r .
00:06:35.668 [2024-04-26 15:16:53.061285] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:06:35.668 [2024-04-26 15:16:53.061348] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1438922 ]
00:06:35.668 EAL: No free 2048 kB hugepages reported on node 1
00:06:35.927 [2024-04-26 15:16:53.125808] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:35.927 [2024-04-26 15:16:53.197482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:35.927 15:16:53 -- accel/accel.sh@20 -- # val=
00:06:35.927 15:16:53 -- accel/accel.sh@21 -- # case "$var" in
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # IFS=:
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # read -r var val
00:06:35.927 15:16:53 -- accel/accel.sh@20 -- # val=
00:06:35.927 15:16:53 -- accel/accel.sh@21 -- # case "$var" in
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # IFS=:
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # read -r var val
00:06:35.927 15:16:53 -- accel/accel.sh@20 -- # val=
00:06:35.927 15:16:53 -- accel/accel.sh@21 -- # case "$var" in
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # IFS=:
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # read -r var val
00:06:35.927 15:16:53 -- accel/accel.sh@20 -- # val=0x1
00:06:35.927 15:16:53 -- accel/accel.sh@21 -- # case "$var" in
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # IFS=:
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # read -r var val
00:06:35.927 15:16:53 -- accel/accel.sh@20 -- # val=
00:06:35.927 15:16:53 -- accel/accel.sh@21 -- # case "$var" in
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # IFS=:
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # read -r var val
00:06:35.927 15:16:53 -- accel/accel.sh@20 -- # val=
00:06:35.927 15:16:53 -- accel/accel.sh@21 -- # case "$var" in
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # IFS=:
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # read -r var val
00:06:35.927 15:16:53 -- accel/accel.sh@20 -- # val=decompress
00:06:35.927 15:16:53 -- accel/accel.sh@21 -- # case "$var" in
00:06:35.927 15:16:53 -- accel/accel.sh@23 -- # accel_opc=decompress
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # IFS=:
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # read -r var val
00:06:35.927 15:16:53 -- accel/accel.sh@20 -- # val='111250 bytes'
00:06:35.927 15:16:53 -- accel/accel.sh@21 -- # case "$var" in
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # IFS=:
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # read -r var val
00:06:35.927 15:16:53 -- accel/accel.sh@20 -- # val=
00:06:35.927 15:16:53 -- accel/accel.sh@21 -- # case "$var" in
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # IFS=:
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # read -r var val
00:06:35.927 15:16:53 -- accel/accel.sh@20 -- # val=software
00:06:35.927 15:16:53 -- accel/accel.sh@21 -- # case "$var" in
00:06:35.927 15:16:53 -- accel/accel.sh@22 -- # accel_module=software
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # IFS=:
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # read -r var val
00:06:35.927 15:16:53 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:35.927 15:16:53 -- accel/accel.sh@21 -- # case "$var" in
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # IFS=:
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # read -r var val
00:06:35.927 15:16:53 -- accel/accel.sh@20 -- # val=32
00:06:35.927 15:16:53 -- accel/accel.sh@21 -- # case "$var" in
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # IFS=:
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # read -r var val
00:06:35.927 15:16:53 -- accel/accel.sh@20 -- # val=32
00:06:35.927 15:16:53 -- accel/accel.sh@21 -- # case "$var" in
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # IFS=:
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # read -r var val
00:06:35.927 15:16:53 -- accel/accel.sh@20 -- # val=1
00:06:35.927 15:16:53 -- accel/accel.sh@21 -- # case "$var" in
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # IFS=:
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # read -r var val
00:06:35.927 15:16:53 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:35.927 15:16:53 -- accel/accel.sh@21 -- # case "$var" in
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # IFS=:
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # read -r var val
00:06:35.927 15:16:53 -- accel/accel.sh@20 -- # val=Yes
00:06:35.927 15:16:53 -- accel/accel.sh@21 -- # case "$var" in
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # IFS=:
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # read -r var val
00:06:35.927 15:16:53 -- accel/accel.sh@20 -- # val=
00:06:35.927 15:16:53 -- accel/accel.sh@21 -- # case "$var" in
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # IFS=:
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # read -r var val
00:06:35.927 15:16:53 -- accel/accel.sh@20 -- # val=
00:06:35.927 15:16:53 -- accel/accel.sh@21 -- # case "$var" in
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # IFS=:
00:06:35.927 15:16:53 -- accel/accel.sh@19 -- # read -r var val
00:06:37.310 15:16:54 -- accel/accel.sh@20 -- # val=
00:06:37.310 15:16:54 -- accel/accel.sh@21 -- # case "$var" in
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # IFS=:
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # read -r var val
00:06:37.310 15:16:54 -- accel/accel.sh@20 -- # val=
00:06:37.310 15:16:54 -- accel/accel.sh@21 -- # case "$var" in
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # IFS=:
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # read -r var val
00:06:37.310 15:16:54 -- accel/accel.sh@20 -- # val=
00:06:37.310 15:16:54 -- accel/accel.sh@21 -- # case "$var" in
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # IFS=:
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # read -r var val
00:06:37.310 15:16:54 -- accel/accel.sh@20 -- # val=
00:06:37.310 15:16:54 -- accel/accel.sh@21 -- # case "$var" in
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # IFS=:
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # read -r var val
00:06:37.310 15:16:54 -- accel/accel.sh@20 -- # val=
00:06:37.310 15:16:54 -- accel/accel.sh@21 -- # case "$var" in
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # IFS=:
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # read -r var val
00:06:37.310 15:16:54 -- accel/accel.sh@20 -- # val=
00:06:37.310 15:16:54 -- accel/accel.sh@21 -- # case "$var" in
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # IFS=:
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # read -r var val
00:06:37.310 15:16:54 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:37.310 15:16:54 -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:37.310 15:16:54 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:37.310
00:06:37.310 real 0m1.311s
00:06:37.310 user 0m1.223s
00:06:37.310 sys 0m0.100s
00:06:37.310 15:16:54 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:37.310 15:16:54 -- common/autotest_common.sh@10 -- # set +x
00:06:37.310 ************************************
00:06:37.310 END TEST accel_decmop_full
00:06:37.310 ************************************
00:06:37.310 15:16:54 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:06:37.310 15:16:54 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:06:37.310 15:16:54 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:37.310 15:16:54 -- common/autotest_common.sh@10 -- # set +x
00:06:37.310 ************************************
00:06:37.310 START TEST accel_decomp_mcore
00:06:37.310 ************************************
00:06:37.310 15:16:54 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:06:37.310 15:16:54 -- accel/accel.sh@16 -- # local accel_opc
00:06:37.310 15:16:54 -- accel/accel.sh@17 -- # local accel_module
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # IFS=:
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # read -r var val
00:06:37.310 15:16:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:06:37.310 15:16:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:06:37.310 15:16:54 -- accel/accel.sh@12 -- # build_accel_config
00:06:37.310 15:16:54 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:37.310 15:16:54 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:37.310 15:16:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:37.310 15:16:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:37.310 15:16:54 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:37.310 15:16:54 -- accel/accel.sh@40 -- # local IFS=,
00:06:37.310 15:16:54 -- accel/accel.sh@41 -- # jq -r .
00:06:37.310 [2024-04-26 15:16:54.538792] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:06:37.310 [2024-04-26 15:16:54.538895] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1439262 ]
00:06:37.310 EAL: No free 2048 kB hugepages reported on node 1
00:06:37.310 [2024-04-26 15:16:54.602426] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:37.310 [2024-04-26 15:16:54.670928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:37.310 [2024-04-26 15:16:54.671042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:37.310 [2024-04-26 15:16:54.671196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:37.310 [2024-04-26 15:16:54.671196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:37.310 15:16:54 -- accel/accel.sh@20 -- # val=
00:06:37.310 15:16:54 -- accel/accel.sh@21 -- # case "$var" in
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # IFS=:
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # read -r var val
00:06:37.310 15:16:54 -- accel/accel.sh@20 -- # val=
00:06:37.310 15:16:54 -- accel/accel.sh@21 -- # case "$var" in
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # IFS=:
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # read -r var val
00:06:37.310 15:16:54 -- accel/accel.sh@20 -- # val=
00:06:37.310 15:16:54 -- accel/accel.sh@21 -- # case "$var" in
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # IFS=:
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # read -r var val
00:06:37.310 15:16:54 -- accel/accel.sh@20 -- # val=0xf
00:06:37.310 15:16:54 -- accel/accel.sh@21 -- # case "$var" in
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # IFS=:
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # read -r var val
00:06:37.310 15:16:54 -- accel/accel.sh@20 -- # val=
00:06:37.310 15:16:54 -- accel/accel.sh@21 -- # case "$var" in
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # IFS=:
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # read -r var val
00:06:37.310 15:16:54 -- accel/accel.sh@20 -- # val=
00:06:37.310 15:16:54 -- accel/accel.sh@21 -- # case "$var" in
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # IFS=:
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # read -r var val
00:06:37.310 15:16:54 -- accel/accel.sh@20 -- # val=decompress
00:06:37.310 15:16:54 -- accel/accel.sh@21 -- # case "$var" in
00:06:37.310 15:16:54 -- accel/accel.sh@23 -- # accel_opc=decompress
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # IFS=:
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # read -r var val
00:06:37.310 15:16:54 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:37.310 15:16:54 -- accel/accel.sh@21 -- # case "$var" in
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # IFS=:
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # read -r var val
00:06:37.310 15:16:54 -- accel/accel.sh@20 -- # val=
00:06:37.310 15:16:54 -- accel/accel.sh@21 -- # case "$var" in
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # IFS=:
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # read -r var val
00:06:37.310 15:16:54 -- accel/accel.sh@20 -- # val=software
00:06:37.310 15:16:54 -- accel/accel.sh@21 -- # case "$var" in
00:06:37.310 15:16:54 -- accel/accel.sh@22 -- # accel_module=software
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # IFS=:
00:06:37.310 15:16:54 -- accel/accel.sh@19 -- # read -r var val
00:06:37.311 15:16:54 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:37.311 15:16:54 -- accel/accel.sh@21 -- # case "$var" in
00:06:37.311 15:16:54 -- accel/accel.sh@19 -- # IFS=:
00:06:37.311 15:16:54 -- accel/accel.sh@19 -- # read -r var val
00:06:37.311 15:16:54 -- accel/accel.sh@20 -- # val=32
00:06:37.311 15:16:54 -- accel/accel.sh@21 -- # case "$var" in
00:06:37.311 15:16:54 -- accel/accel.sh@19 -- # IFS=:
00:06:37.311 15:16:54 -- accel/accel.sh@19 -- # read -r var val
00:06:37.311 15:16:54 -- accel/accel.sh@20 -- # val=32
00:06:37.311 15:16:54 -- accel/accel.sh@21 -- # case "$var" in
00:06:37.311 15:16:54 -- accel/accel.sh@19 -- # IFS=:
00:06:37.311 15:16:54 -- accel/accel.sh@19 -- # read -r var val
00:06:37.311 15:16:54 -- accel/accel.sh@20 -- # val=1
00:06:37.311 15:16:54 -- accel/accel.sh@21 -- # case "$var" in
00:06:37.311 15:16:54 -- accel/accel.sh@19 -- # IFS=:
00:06:37.311 15:16:54 -- accel/accel.sh@19 -- # read -r var val
00:06:37.311 15:16:54 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:37.311 15:16:54 -- accel/accel.sh@21 -- # case "$var" in
00:06:37.311 15:16:54 -- accel/accel.sh@19 -- # IFS=:
00:06:37.311 15:16:54 -- accel/accel.sh@19 -- # read -r var val
00:06:37.311 15:16:54 -- accel/accel.sh@20 -- # val=Yes
00:06:37.311 15:16:54 -- accel/accel.sh@21 -- # case "$var" in
00:06:37.311 15:16:54 -- accel/accel.sh@19 -- # IFS=:
00:06:37.311 15:16:54 -- accel/accel.sh@19 -- # read -r var val
00:06:37.311 15:16:54 -- accel/accel.sh@20 -- # val=
00:06:37.311 15:16:54 -- accel/accel.sh@21 -- # case "$var" in
15:16:54 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 15:16:54 -- accel/accel.sh@19 -- # read -r var val 00:06:37.311 15:16:54 -- accel/accel.sh@20 -- # val= 00:06:37.311 15:16:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.311 15:16:54 -- accel/accel.sh@19 -- # IFS=: 00:06:37.311 15:16:54 -- accel/accel.sh@19 -- # read -r var val 00:06:38.696 15:16:55 -- accel/accel.sh@20 -- # val= 00:06:38.696 15:16:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.696 15:16:55 -- accel/accel.sh@19 -- # IFS=: 00:06:38.696 15:16:55 -- accel/accel.sh@19 -- # read -r var val 00:06:38.696 15:16:55 -- accel/accel.sh@20 -- # val= 00:06:38.696 15:16:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.696 15:16:55 -- accel/accel.sh@19 -- # IFS=: 00:06:38.696 15:16:55 -- accel/accel.sh@19 -- # read -r var val 00:06:38.696 15:16:55 -- accel/accel.sh@20 -- # val= 00:06:38.696 15:16:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.696 15:16:55 -- accel/accel.sh@19 -- # IFS=: 00:06:38.696 15:16:55 -- accel/accel.sh@19 -- # read -r var val 00:06:38.696 15:16:55 -- accel/accel.sh@20 -- # val= 00:06:38.696 15:16:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.696 15:16:55 -- accel/accel.sh@19 -- # IFS=: 00:06:38.696 15:16:55 -- accel/accel.sh@19 -- # read -r var val 00:06:38.696 15:16:55 -- accel/accel.sh@20 -- # val= 00:06:38.696 15:16:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.696 15:16:55 -- accel/accel.sh@19 -- # IFS=: 00:06:38.696 15:16:55 -- accel/accel.sh@19 -- # read -r var val 00:06:38.696 15:16:55 -- accel/accel.sh@20 -- # val= 00:06:38.696 15:16:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.696 15:16:55 -- accel/accel.sh@19 -- # IFS=: 00:06:38.696 15:16:55 -- accel/accel.sh@19 -- # read -r var val 00:06:38.696 15:16:55 -- accel/accel.sh@20 -- # val= 00:06:38.696 15:16:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.696 15:16:55 -- accel/accel.sh@19 -- # IFS=: 00:06:38.696 15:16:55 -- accel/accel.sh@19 -- # read -r var val 00:06:38.696 15:16:55 
-- accel/accel.sh@20 -- # val= 00:06:38.696 15:16:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.696 15:16:55 -- accel/accel.sh@19 -- # IFS=: 00:06:38.696 15:16:55 -- accel/accel.sh@19 -- # read -r var val 00:06:38.696 15:16:55 -- accel/accel.sh@20 -- # val= 00:06:38.696 15:16:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.696 15:16:55 -- accel/accel.sh@19 -- # IFS=: 00:06:38.696 15:16:55 -- accel/accel.sh@19 -- # read -r var val 00:06:38.696 15:16:55 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.696 15:16:55 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:38.696 15:16:55 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.696 00:06:38.696 real 0m1.300s 00:06:38.696 user 0m4.439s 00:06:38.696 sys 0m0.109s 00:06:38.696 15:16:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:38.696 15:16:55 -- common/autotest_common.sh@10 -- # set +x 00:06:38.696 ************************************ 00:06:38.696 END TEST accel_decomp_mcore 00:06:38.696 ************************************ 00:06:38.696 15:16:55 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:38.696 15:16:55 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:38.696 15:16:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.696 15:16:55 -- common/autotest_common.sh@10 -- # set +x 00:06:38.696 ************************************ 00:06:38.696 START TEST accel_decomp_full_mcore 00:06:38.696 ************************************ 00:06:38.696 15:16:56 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:38.696 15:16:56 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.696 15:16:56 -- accel/accel.sh@17 -- # local accel_module 00:06:38.697 15:16:56 -- accel/accel.sh@19 -- # IFS=: 00:06:38.697 15:16:56 -- 
accel/accel.sh@19 -- # read -r var val 00:06:38.697 15:16:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:38.697 15:16:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:38.697 15:16:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.697 15:16:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.697 15:16:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.697 15:16:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.697 15:16:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.697 15:16:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.697 15:16:56 -- accel/accel.sh@40 -- # local IFS=, 00:06:38.697 15:16:56 -- accel/accel.sh@41 -- # jq -r . 00:06:38.697 [2024-04-26 15:16:56.038910] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
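The `-m 0xf` passed to accel_perf above becomes the DPDK EAL coremask `-c 0xf`, which is why the log reports "Total cores available: 4" and one reactor per core 0-3. As a standalone illustration (plain bash, not SPDK code), a hex coremask expands to core IDs like this:

```shell
# Standalone sketch (not SPDK/DPDK code): expand a hex coremask such as
# the -m 0xf / -c 0xf seen in the log into the core IDs it selects.
mask_to_cores() {
  local mask=$(( $1 )) bit=0 cores=()
  while (( mask )); do
    (( mask & 1 )) && cores+=("$bit")   # low bit set -> this core is in the mask
    (( mask >>= 1, bit++ ))             # shift to test the next core
  done
  echo "${cores[*]}"
}

mask_to_cores 0xf   # prints "0 1 2 3", matching the four reactors in the log
```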
00:06:38.697 [2024-04-26 15:16:56.038986] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1439522 ] 00:06:38.697 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.697 [2024-04-26 15:16:56.105591] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:38.958 [2024-04-26 15:16:56.181834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.958 [2024-04-26 15:16:56.181979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.958 [2024-04-26 15:16:56.182313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.958 [2024-04-26 15:16:56.182314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.958 15:16:56 -- accel/accel.sh@20 -- # val= 00:06:38.958 15:16:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.958 15:16:56 -- accel/accel.sh@19 -- # IFS=: 00:06:38.958 15:16:56 -- accel/accel.sh@19 -- # read -r var val 00:06:38.958 15:16:56 -- accel/accel.sh@20 -- # val= 00:06:38.958 15:16:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.958 15:16:56 -- accel/accel.sh@19 -- # IFS=: 00:06:38.958 15:16:56 -- accel/accel.sh@19 -- # read -r var val 00:06:38.958 15:16:56 -- accel/accel.sh@20 -- # val= 00:06:38.958 15:16:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.958 15:16:56 -- accel/accel.sh@19 -- # IFS=: 00:06:38.958 15:16:56 -- accel/accel.sh@19 -- # read -r var val 00:06:38.958 15:16:56 -- accel/accel.sh@20 -- # val=0xf 00:06:38.958 15:16:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.958 15:16:56 -- accel/accel.sh@19 -- # IFS=: 00:06:38.958 15:16:56 -- accel/accel.sh@19 -- # read -r var val 00:06:38.958 15:16:56 -- accel/accel.sh@20 -- # val= 00:06:38.958 15:16:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.958 15:16:56 -- accel/accel.sh@19 -- # IFS=: 00:06:38.958 15:16:56 
-- accel/accel.sh@19 -- # read -r var val 00:06:38.958 15:16:56 -- accel/accel.sh@20 -- # val= 00:06:38.958 15:16:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.958 15:16:56 -- accel/accel.sh@19 -- # IFS=: 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # read -r var val 00:06:38.959 15:16:56 -- accel/accel.sh@20 -- # val=decompress 00:06:38.959 15:16:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.959 15:16:56 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # IFS=: 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # read -r var val 00:06:38.959 15:16:56 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:38.959 15:16:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # IFS=: 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # read -r var val 00:06:38.959 15:16:56 -- accel/accel.sh@20 -- # val= 00:06:38.959 15:16:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # IFS=: 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # read -r var val 00:06:38.959 15:16:56 -- accel/accel.sh@20 -- # val=software 00:06:38.959 15:16:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.959 15:16:56 -- accel/accel.sh@22 -- # accel_module=software 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # IFS=: 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # read -r var val 00:06:38.959 15:16:56 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.959 15:16:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # IFS=: 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # read -r var val 00:06:38.959 15:16:56 -- accel/accel.sh@20 -- # val=32 00:06:38.959 15:16:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # IFS=: 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # read -r var val 00:06:38.959 15:16:56 -- accel/accel.sh@20 -- # val=32 00:06:38.959 15:16:56 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # IFS=: 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # read -r var val 00:06:38.959 15:16:56 -- accel/accel.sh@20 -- # val=1 00:06:38.959 15:16:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # IFS=: 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # read -r var val 00:06:38.959 15:16:56 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.959 15:16:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # IFS=: 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # read -r var val 00:06:38.959 15:16:56 -- accel/accel.sh@20 -- # val=Yes 00:06:38.959 15:16:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # IFS=: 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # read -r var val 00:06:38.959 15:16:56 -- accel/accel.sh@20 -- # val= 00:06:38.959 15:16:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # IFS=: 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # read -r var val 00:06:38.959 15:16:56 -- accel/accel.sh@20 -- # val= 00:06:38.959 15:16:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # IFS=: 00:06:38.959 15:16:56 -- accel/accel.sh@19 -- # read -r var val 00:06:39.898 15:16:57 -- accel/accel.sh@20 -- # val= 00:06:39.898 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.898 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:39.898 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:39.898 15:16:57 -- accel/accel.sh@20 -- # val= 00:06:39.898 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.898 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:39.898 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:39.898 15:16:57 -- accel/accel.sh@20 -- # val= 00:06:39.898 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.898 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:39.898 
15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:39.898 15:16:57 -- accel/accel.sh@20 -- # val= 00:06:39.898 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.898 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:39.898 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:39.898 15:16:57 -- accel/accel.sh@20 -- # val= 00:06:39.898 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.898 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:39.898 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:39.898 15:16:57 -- accel/accel.sh@20 -- # val= 00:06:39.898 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.898 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:39.898 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:39.898 15:16:57 -- accel/accel.sh@20 -- # val= 00:06:39.898 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.898 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:39.898 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:39.898 15:16:57 -- accel/accel.sh@20 -- # val= 00:06:39.898 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.898 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:39.898 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:39.898 15:16:57 -- accel/accel.sh@20 -- # val= 00:06:39.898 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.898 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:39.898 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:39.898 15:16:57 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.898 15:16:57 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:39.898 15:16:57 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.898 00:06:39.898 real 0m1.327s 00:06:39.898 user 0m4.503s 00:06:39.898 sys 0m0.119s 00:06:39.898 15:16:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:39.898 15:16:57 -- common/autotest_common.sh@10 -- # set +x 00:06:39.898 ************************************ 00:06:39.898 END TEST 
accel_decomp_full_mcore 00:06:39.898 ************************************ 00:06:40.158 15:16:57 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:40.158 15:16:57 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:40.158 15:16:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.158 15:16:57 -- common/autotest_common.sh@10 -- # set +x 00:06:40.158 ************************************ 00:06:40.158 START TEST accel_decomp_mthread 00:06:40.158 ************************************ 00:06:40.158 15:16:57 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:40.158 15:16:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.158 15:16:57 -- accel/accel.sh@17 -- # local accel_module 00:06:40.158 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:40.158 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:40.158 15:16:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:40.158 15:16:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.158 15:16:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:40.158 15:16:57 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.158 15:16:57 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.158 15:16:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.158 15:16:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.158 15:16:57 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.158 15:16:57 -- accel/accel.sh@40 -- # local IFS=, 00:06:40.158 15:16:57 -- accel/accel.sh@41 -- # jq -r . 
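The `IFS=:` / `read -r var val` / `case "$var" in` triplets that dominate the xtrace above are accel_test's loop splitting accel_perf's "key: value" output one line at a time. A minimal standalone sketch of that pattern (the key names and sample input here are invented for illustration; the real keys come from accel_perf's output):

```shell
# Minimal sketch of the parse loop the xtrace records: split each
# "key: value" line on ':' and dispatch on the key. Sample keys/input
# are assumptions, not accel_perf's actual output format.
parse_output() {
  local var val accel_opc accel_module
  while IFS=: read -r var val; do
    case "$var" in
      workload) accel_opc=${val# } ;;      # strip the space after ':'
      module)   accel_module=${val# } ;;
      *) ;;                                # ignore keys we don't track
    esac
  done
  echo "opc=$accel_opc module=$accel_module"
}

printf 'workload: decompress\nmodule: software\n' | parse_output
```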
00:06:40.158 [2024-04-26 15:16:57.559811] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:06:40.158 [2024-04-26 15:16:57.559880] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1439804 ] 00:06:40.158 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.419 [2024-04-26 15:16:57.621388] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.419 [2024-04-26 15:16:57.685328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.419 15:16:57 -- accel/accel.sh@20 -- # val= 00:06:40.419 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.419 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:40.419 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:40.419 15:16:57 -- accel/accel.sh@20 -- # val= 00:06:40.419 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.419 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:40.419 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:40.419 15:16:57 -- accel/accel.sh@20 -- # val= 00:06:40.419 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.419 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:40.419 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:40.419 15:16:57 -- accel/accel.sh@20 -- # val=0x1 00:06:40.419 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.419 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:40.419 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:40.419 15:16:57 -- accel/accel.sh@20 -- # val= 00:06:40.419 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.419 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:40.419 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:40.419 15:16:57 -- accel/accel.sh@20 -- # val= 00:06:40.419 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.419 15:16:57 -- 
accel/accel.sh@19 -- # IFS=: 00:06:40.419 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:40.419 15:16:57 -- accel/accel.sh@20 -- # val=decompress 00:06:40.419 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.419 15:16:57 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:40.419 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:40.419 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:40.419 15:16:57 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.419 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.419 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:40.419 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:40.419 15:16:57 -- accel/accel.sh@20 -- # val= 00:06:40.419 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.419 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:40.419 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:40.419 15:16:57 -- accel/accel.sh@20 -- # val=software 00:06:40.419 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.419 15:16:57 -- accel/accel.sh@22 -- # accel_module=software 00:06:40.419 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:40.419 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:40.419 15:16:57 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.419 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.419 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:40.419 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:40.419 15:16:57 -- accel/accel.sh@20 -- # val=32 00:06:40.419 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.419 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:40.419 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:40.419 15:16:57 -- accel/accel.sh@20 -- # val=32 00:06:40.419 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.419 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:40.420 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:40.420 15:16:57 -- 
accel/accel.sh@20 -- # val=2 00:06:40.420 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.420 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:40.420 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:40.420 15:16:57 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.420 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.420 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:40.420 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:40.420 15:16:57 -- accel/accel.sh@20 -- # val=Yes 00:06:40.420 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.420 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:40.420 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:40.420 15:16:57 -- accel/accel.sh@20 -- # val= 00:06:40.420 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.420 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:40.420 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:40.420 15:16:57 -- accel/accel.sh@20 -- # val= 00:06:40.420 15:16:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.420 15:16:57 -- accel/accel.sh@19 -- # IFS=: 00:06:40.420 15:16:57 -- accel/accel.sh@19 -- # read -r var val 00:06:41.805 15:16:58 -- accel/accel.sh@20 -- # val= 00:06:41.805 15:16:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.805 15:16:58 -- accel/accel.sh@19 -- # IFS=: 00:06:41.805 15:16:58 -- accel/accel.sh@19 -- # read -r var val 00:06:41.805 15:16:58 -- accel/accel.sh@20 -- # val= 00:06:41.805 15:16:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.805 15:16:58 -- accel/accel.sh@19 -- # IFS=: 00:06:41.805 15:16:58 -- accel/accel.sh@19 -- # read -r var val 00:06:41.805 15:16:58 -- accel/accel.sh@20 -- # val= 00:06:41.805 15:16:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.805 15:16:58 -- accel/accel.sh@19 -- # IFS=: 00:06:41.805 15:16:58 -- accel/accel.sh@19 -- # read -r var val 00:06:41.805 15:16:58 -- accel/accel.sh@20 -- # val= 00:06:41.805 15:16:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.805 15:16:58 
-- accel/accel.sh@19 -- # IFS=: 00:06:41.805 15:16:58 -- accel/accel.sh@19 -- # read -r var val 00:06:41.805 15:16:58 -- accel/accel.sh@20 -- # val= 00:06:41.805 15:16:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.805 15:16:58 -- accel/accel.sh@19 -- # IFS=: 00:06:41.805 15:16:58 -- accel/accel.sh@19 -- # read -r var val 00:06:41.805 15:16:58 -- accel/accel.sh@20 -- # val= 00:06:41.805 15:16:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.805 15:16:58 -- accel/accel.sh@19 -- # IFS=: 00:06:41.805 15:16:58 -- accel/accel.sh@19 -- # read -r var val 00:06:41.805 15:16:58 -- accel/accel.sh@20 -- # val= 00:06:41.806 15:16:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.806 15:16:58 -- accel/accel.sh@19 -- # IFS=: 00:06:41.806 15:16:58 -- accel/accel.sh@19 -- # read -r var val 00:06:41.806 15:16:58 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.806 15:16:58 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:41.806 15:16:58 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.806 00:06:41.806 real 0m1.289s 00:06:41.806 user 0m1.208s 00:06:41.806 sys 0m0.095s 00:06:41.806 15:16:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:41.806 15:16:58 -- common/autotest_common.sh@10 -- # set +x 00:06:41.806 ************************************ 00:06:41.806 END TEST accel_decomp_mthread 00:06:41.806 ************************************ 00:06:41.806 15:16:58 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:41.806 15:16:58 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:41.806 15:16:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.806 15:16:58 -- common/autotest_common.sh@10 -- # set +x 00:06:41.806 ************************************ 00:06:41.806 START TEST accel_deomp_full_mthread 00:06:41.806 ************************************ 00:06:41.806 15:16:59 -- 
common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:41.806 15:16:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.806 15:16:59 -- accel/accel.sh@17 -- # local accel_module 00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # IFS=: 00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # read -r var val 00:06:41.806 15:16:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:41.806 15:16:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:41.806 15:16:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.806 15:16:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.806 15:16:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.806 15:16:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.806 15:16:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.806 15:16:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.806 15:16:59 -- accel/accel.sh@40 -- # local IFS=, 00:06:41.806 15:16:59 -- accel/accel.sh@41 -- # jq -r . 00:06:41.806 [2024-04-26 15:16:59.044464] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
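Every test in this log is wrapped in the same scaffold: a starred START TEST banner, the timed command (producing the `real`/`user`/`sys` lines), then an END TEST banner. A rough standalone approximation of that wrapper follows; it is an assumption for illustration, not SPDK's actual `run_test`, which also manages xtrace state and argument checks:

```shell
# Rough approximation of the banner/timing scaffold visible in the log;
# not SPDK's real run_test implementation.
run_test_sketch() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"                  # 'time' emits the real/user/sys lines (on stderr)
  local rc=$?                # captured before 'local' resets $?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

run_test_sketch demo true
```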
00:06:41.806 [2024-04-26 15:16:59.044533] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1440067 ] 00:06:41.806 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.806 [2024-04-26 15:16:59.109573] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.806 [2024-04-26 15:16:59.179236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.806 15:16:59 -- accel/accel.sh@20 -- # val= 00:06:41.806 15:16:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # IFS=: 00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # read -r var val 00:06:41.806 15:16:59 -- accel/accel.sh@20 -- # val= 00:06:41.806 15:16:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # IFS=: 00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # read -r var val 00:06:41.806 15:16:59 -- accel/accel.sh@20 -- # val= 00:06:41.806 15:16:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # IFS=: 00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # read -r var val 00:06:41.806 15:16:59 -- accel/accel.sh@20 -- # val=0x1 00:06:41.806 15:16:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # IFS=: 00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # read -r var val 00:06:41.806 15:16:59 -- accel/accel.sh@20 -- # val= 00:06:41.806 15:16:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # IFS=: 00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # read -r var val 00:06:41.806 15:16:59 -- accel/accel.sh@20 -- # val= 00:06:41.806 15:16:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # IFS=: 00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # read -r var val 00:06:41.806 15:16:59 -- accel/accel.sh@20 
-- # val=decompress
00:06:41.806 15:16:59 -- accel/accel.sh@21 -- # case "$var" in
00:06:41.806 15:16:59 -- accel/accel.sh@23 -- # accel_opc=decompress
00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # IFS=:
00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # read -r var val
00:06:41.806 15:16:59 -- accel/accel.sh@20 -- # val='111250 bytes'
00:06:41.806 15:16:59 -- accel/accel.sh@21 -- # case "$var" in
00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # IFS=:
00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # read -r var val
00:06:41.806 15:16:59 -- accel/accel.sh@20 -- # val=
00:06:41.806 15:16:59 -- accel/accel.sh@21 -- # case "$var" in
00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # IFS=:
00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # read -r var val
00:06:41.806 15:16:59 -- accel/accel.sh@20 -- # val=software
00:06:41.806 15:16:59 -- accel/accel.sh@21 -- # case "$var" in
00:06:41.806 15:16:59 -- accel/accel.sh@22 -- # accel_module=software
00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # IFS=:
00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # read -r var val
00:06:41.806 15:16:59 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:41.806 15:16:59 -- accel/accel.sh@21 -- # case "$var" in
00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # IFS=:
00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # read -r var val
00:06:41.806 15:16:59 -- accel/accel.sh@20 -- # val=32
00:06:41.806 15:16:59 -- accel/accel.sh@21 -- # case "$var" in
00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # IFS=:
00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # read -r var val
00:06:41.806 15:16:59 -- accel/accel.sh@20 -- # val=32
00:06:41.806 15:16:59 -- accel/accel.sh@21 -- # case "$var" in
00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # IFS=:
00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # read -r var val
00:06:41.806 15:16:59 -- accel/accel.sh@20 -- # val=2
00:06:41.806 15:16:59 -- accel/accel.sh@21 -- # case "$var" in
00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # IFS=:
00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # read -r var val
00:06:41.806 15:16:59 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:41.806 15:16:59 -- accel/accel.sh@21 -- # case "$var" in
00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # IFS=:
00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # read -r var val
00:06:41.806 15:16:59 -- accel/accel.sh@20 -- # val=Yes
00:06:41.806 15:16:59 -- accel/accel.sh@21 -- # case "$var" in
00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # IFS=:
00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # read -r var val
00:06:41.806 15:16:59 -- accel/accel.sh@20 -- # val=
00:06:41.806 15:16:59 -- accel/accel.sh@21 -- # case "$var" in
00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # IFS=:
00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # read -r var val
00:06:41.806 15:16:59 -- accel/accel.sh@20 -- # val=
00:06:41.806 15:16:59 -- accel/accel.sh@21 -- # case "$var" in
00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # IFS=:
00:06:41.806 15:16:59 -- accel/accel.sh@19 -- # read -r var val
00:06:43.193 15:17:00 -- accel/accel.sh@20 -- # val=
00:06:43.193 15:17:00 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.193 15:17:00 -- accel/accel.sh@19 -- # IFS=:
00:06:43.193 15:17:00 -- accel/accel.sh@19 -- # read -r var val
00:06:43.193 15:17:00 -- accel/accel.sh@20 -- # val=
00:06:43.193 15:17:00 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.193 15:17:00 -- accel/accel.sh@19 -- # IFS=:
00:06:43.193 15:17:00 -- accel/accel.sh@19 -- # read -r var val
00:06:43.193 15:17:00 -- accel/accel.sh@20 -- # val=
00:06:43.193 15:17:00 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.193 15:17:00 -- accel/accel.sh@19 -- # IFS=:
00:06:43.193 15:17:00 -- accel/accel.sh@19 -- # read -r var val
00:06:43.193 15:17:00 -- accel/accel.sh@20 -- # val=
00:06:43.193 15:17:00 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.193 15:17:00 -- accel/accel.sh@19 -- # IFS=:
00:06:43.193 15:17:00 -- accel/accel.sh@19 -- # read -r var val
00:06:43.193 15:17:00 -- accel/accel.sh@20 -- # val=
00:06:43.193 15:17:00 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.193 15:17:00 -- accel/accel.sh@19 -- # IFS=:
00:06:43.193 15:17:00 -- accel/accel.sh@19 -- # read -r var val
00:06:43.193 15:17:00 -- accel/accel.sh@20 -- # val=
00:06:43.193 15:17:00 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.193 15:17:00 -- accel/accel.sh@19 -- # IFS=:
00:06:43.193 15:17:00 -- accel/accel.sh@19 -- # read -r var val
00:06:43.193 15:17:00 -- accel/accel.sh@20 -- # val=
00:06:43.193 15:17:00 -- accel/accel.sh@21 -- # case "$var" in
00:06:43.193 15:17:00 -- accel/accel.sh@19 -- # IFS=:
00:06:43.193 15:17:00 -- accel/accel.sh@19 -- # read -r var val
00:06:43.193 15:17:00 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:43.193 15:17:00 -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:43.193 15:17:00 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:43.193
00:06:43.193 real 0m1.330s
00:06:43.193 user 0m1.235s
00:06:43.193 sys 0m0.107s
00:06:43.193 15:17:00 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:43.193 15:17:00 -- common/autotest_common.sh@10 -- # set +x
00:06:43.193 ************************************
00:06:43.193 END TEST accel_deomp_full_mthread
00:06:43.193 ************************************
00:06:43.193 15:17:00 -- accel/accel.sh@124 -- # [[ n == y ]]
00:06:43.193 15:17:00 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:06:43.193 15:17:00 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:06:43.193 15:17:00 -- accel/accel.sh@137 -- # build_accel_config
00:06:43.193 15:17:00 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:43.193 15:17:00 -- common/autotest_common.sh@10 -- # set +x
00:06:43.193 15:17:00 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:43.193 15:17:00 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:43.193 15:17:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:43.193 15:17:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:43.193 15:17:00 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:43.193 15:17:00 -- accel/accel.sh@40 -- # local IFS=,
00:06:43.193 15:17:00 -- accel/accel.sh@41 -- # jq -r .
00:06:43.193 ************************************
00:06:43.193 START TEST accel_dif_functional_tests
00:06:43.193 ************************************
00:06:43.193 15:17:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:06:43.193 [2024-04-26 15:17:00.585493] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:06:43.193 [2024-04-26 15:17:00.585539] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1440418 ]
00:06:43.193 EAL: No free 2048 kB hugepages reported on node 1
00:06:43.454 [2024-04-26 15:17:00.646093] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:43.454 [2024-04-26 15:17:00.712635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:43.454 [2024-04-26 15:17:00.712750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:43.454 [2024-04-26 15:17:00.712753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:43.454
00:06:43.454
00:06:43.454 CUnit - A unit testing framework for C - Version 2.1-3
00:06:43.454 http://cunit.sourceforge.net/
00:06:43.454
00:06:43.454
00:06:43.454 Suite: accel_dif
00:06:43.454 Test: verify: DIF generated, GUARD check ...passed
00:06:43.454 Test: verify: DIF generated, APPTAG check ...passed
00:06:43.454 Test: verify: DIF generated, REFTAG check ...passed
00:06:43.454 Test: verify: DIF not generated, GUARD check ...[2024-04-26 15:17:00.768287] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:06:43.454 [2024-04-26 15:17:00.768324] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:06:43.454 passed
00:06:43.454 Test: verify: DIF not generated, APPTAG check ...[2024-04-26 15:17:00.768354] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:06:43.454 [2024-04-26 15:17:00.768369] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:06:43.454 passed
00:06:43.454 Test: verify: DIF not generated, REFTAG check ...[2024-04-26 15:17:00.768384] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:06:43.454 [2024-04-26 15:17:00.768399] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:06:43.454 passed
00:06:43.454 Test: verify: APPTAG correct, APPTAG check ...passed
00:06:43.454 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-26 15:17:00.768444] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:06:43.454 passed
00:06:43.454 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:06:43.454 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:06:43.454 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:06:43.454 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-26 15:17:00.768560] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:06:43.454 passed
00:06:43.454 Test: generate copy: DIF generated, GUARD check ...passed
00:06:43.454 Test: generate copy: DIF generated, APTTAG check ...passed
00:06:43.454 Test: generate copy: DIF generated, REFTAG check ...passed
00:06:43.454 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:06:43.454 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:06:43.454 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:06:43.454 Test: generate copy: iovecs-len validate ...[2024-04-26 15:17:00.768746] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:06:43.454 passed
00:06:43.454 Test: generate copy: buffer alignment validate ...passed
00:06:43.454
00:06:43.454 Run Summary: Type Total Ran Passed Failed Inactive
00:06:43.454 suites 1 1 n/a 0 0
00:06:43.454 tests 20 20 20 0 0
00:06:43.454 asserts 204 204 204 0 n/a
00:06:43.454
00:06:43.454 Elapsed time = 0.000 seconds
00:06:43.454
00:06:43.454 real 0m0.346s
00:06:43.454 user 0m0.449s
00:06:43.454 sys 0m0.117s
00:06:43.454 15:17:00 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:43.454 15:17:00 -- common/autotest_common.sh@10 -- # set +x
00:06:43.454 ************************************
00:06:43.454 END TEST accel_dif_functional_tests
00:06:43.454 ************************************
00:06:43.716
00:06:43.716 real 0m33.150s
00:06:43.716 user 0m34.848s
00:06:43.716 sys 0m5.566s
00:06:43.716 15:17:00 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:43.716 15:17:00 -- common/autotest_common.sh@10 -- # set +x
00:06:43.716 ************************************
00:06:43.716 END TEST accel
00:06:43.716 ************************************
00:06:43.716 15:17:00 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:06:43.716 15:17:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:43.716 15:17:00 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:43.716 15:17:00 -- common/autotest_common.sh@10 -- # set +x
00:06:43.716 ************************************
00:06:43.716 START TEST accel_rpc
00:06:43.716 ************************************
00:06:43.716 15:17:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:06:43.976 * Looking for test storage...
00:06:43.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:06:43.976 15:17:01 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:43.976 15:17:01 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1440753
00:06:43.976 15:17:01 -- accel/accel_rpc.sh@15 -- # waitforlisten 1440753
00:06:43.976 15:17:01 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:06:43.976 15:17:01 -- common/autotest_common.sh@817 -- # '[' -z 1440753 ']'
00:06:43.976 15:17:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:43.976 15:17:01 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:43.976 15:17:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:43.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:43.976 15:17:01 -- common/autotest_common.sh@826 -- # xtrace_disable
00:06:43.976 15:17:01 -- common/autotest_common.sh@10 -- # set +x
00:06:43.976 [2024-04-26 15:17:01.263586] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:06:43.976 [2024-04-26 15:17:01.263636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1440753 ]
00:06:43.976 EAL: No free 2048 kB hugepages reported on node 1
00:06:43.976 [2024-04-26 15:17:01.323411] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:43.976 [2024-04-26 15:17:01.386114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:44.917 15:17:02 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:06:44.917 15:17:02 -- common/autotest_common.sh@850 -- # return 0
00:06:44.917 15:17:02 -- accel/accel_rpc.sh@45 -- # [[ y == y ]]
00:06:44.917 15:17:02 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]]
00:06:44.917 15:17:02 -- accel/accel_rpc.sh@49 -- # [[ y == y ]]
00:06:44.917 15:17:02 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]]
00:06:44.917 15:17:02 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:06:44.917 15:17:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:44.917 15:17:02 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:44.917 15:17:02 -- common/autotest_common.sh@10 -- # set +x
00:06:44.917 ************************************
00:06:44.917 START TEST accel_assign_opcode
00:06:44.917 ************************************
00:06:44.917 15:17:02 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite
00:06:44.917 15:17:02 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:06:44.917 15:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:44.917 15:17:02 -- common/autotest_common.sh@10 -- # set +x
00:06:44.917 [2024-04-26 15:17:02.168325] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:06:44.917 15:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:44.917 15:17:02 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:06:44.917 15:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:44.918 15:17:02 -- common/autotest_common.sh@10 -- # set +x
00:06:44.918 [2024-04-26 15:17:02.176336] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:06:44.918 15:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:44.918 15:17:02 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:06:44.918 15:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:44.918 15:17:02 -- common/autotest_common.sh@10 -- # set +x
00:06:44.918 15:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:44.918 15:17:02 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:06:44.918 15:17:02 -- accel/accel_rpc.sh@42 -- # jq -r .copy
00:06:44.918 15:17:02 -- accel/accel_rpc.sh@42 -- # grep software
00:06:44.918 15:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:44.918 15:17:02 -- common/autotest_common.sh@10 -- # set +x
00:06:44.918 15:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:44.918 software
00:06:44.918
00:06:44.918 real 0m0.202s
00:06:44.918 user 0m0.044s
00:06:44.918 sys 0m0.009s
00:06:44.918 15:17:02 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:44.918 15:17:02 -- common/autotest_common.sh@10 -- # set +x
00:06:44.918 ************************************
00:06:44.918 END TEST accel_assign_opcode
00:06:44.918 ************************************
00:06:45.178 15:17:02 -- accel/accel_rpc.sh@55 -- # killprocess 1440753
00:06:45.178 15:17:02 -- common/autotest_common.sh@936 -- # '[' -z 1440753 ']'
00:06:45.178 15:17:02 -- common/autotest_common.sh@940 -- # kill -0 1440753
00:06:45.178 15:17:02 -- common/autotest_common.sh@941 -- # uname
00:06:45.178 15:17:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:45.178 15:17:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1440753
00:06:45.178 15:17:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:45.178 15:17:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:45.178 15:17:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1440753'
00:06:45.178 killing process with pid 1440753
00:06:45.178 15:17:02 -- common/autotest_common.sh@955 -- # kill 1440753
00:06:45.178 15:17:02 -- common/autotest_common.sh@960 -- # wait 1440753
00:06:45.440
00:06:45.440 real 0m1.562s
00:06:45.440 user 0m1.695s
00:06:45.440 sys 0m0.437s
00:06:45.440 15:17:02 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:45.440 15:17:02 -- common/autotest_common.sh@10 -- # set +x
00:06:45.440 ************************************
00:06:45.440 END TEST accel_rpc
00:06:45.440 ************************************
00:06:45.440 15:17:02 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:06:45.440 15:17:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:45.440 15:17:02 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:45.440 15:17:02 -- common/autotest_common.sh@10 -- # set +x
00:06:45.440 ************************************
00:06:45.440 START TEST app_cmdline
00:06:45.440 ************************************
00:06:45.440 15:17:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:06:45.744 * Looking for test storage...
00:06:45.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:06:45.744 15:17:02 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:06:45.744 15:17:02 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1441218
00:06:45.744 15:17:02 -- app/cmdline.sh@18 -- # waitforlisten 1441218
00:06:45.744 15:17:02 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:06:45.744 15:17:02 -- common/autotest_common.sh@817 -- # '[' -z 1441218 ']'
00:06:45.744 15:17:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:45.744 15:17:02 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:45.744 15:17:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:45.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:45.744 15:17:02 -- common/autotest_common.sh@826 -- # xtrace_disable
00:06:45.744 15:17:02 -- common/autotest_common.sh@10 -- # set +x
00:06:45.744 [2024-04-26 15:17:03.024082] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:06:45.744 [2024-04-26 15:17:03.024140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1441218 ]
00:06:45.744 EAL: No free 2048 kB hugepages reported on node 1
00:06:45.744 [2024-04-26 15:17:03.088796] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:45.744 [2024-04-26 15:17:03.161643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:46.353 15:17:03 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:06:46.353 15:17:03 -- common/autotest_common.sh@850 -- # return 0
00:06:46.353 15:17:03 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:06:46.614 {
00:06:46.614 "version": "SPDK v24.05-pre git sha1 f93182c78",
00:06:46.614 "fields": {
00:06:46.614 "major": 24,
00:06:46.614 "minor": 5,
00:06:46.615 "patch": 0,
00:06:46.615 "suffix": "-pre",
00:06:46.615 "commit": "f93182c78"
00:06:46.615 }
00:06:46.615 }
00:06:46.615 15:17:03 -- app/cmdline.sh@22 -- # expected_methods=()
00:06:46.615 15:17:03 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:06:46.615 15:17:03 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:06:46.615 15:17:03 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:06:46.615 15:17:03 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:06:46.615 15:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:46.615 15:17:03 -- app/cmdline.sh@26 -- # jq -r '.[]'
00:06:46.615 15:17:03 -- common/autotest_common.sh@10 -- # set +x
00:06:46.615 15:17:03 -- app/cmdline.sh@26 -- # sort
00:06:46.615 15:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:46.615 15:17:03 -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:06:46.615 15:17:03 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:06:46.615 15:17:03 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:46.615 15:17:03 -- common/autotest_common.sh@638 -- # local es=0
00:06:46.615 15:17:03 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:46.615 15:17:03 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:46.615 15:17:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:06:46.615 15:17:03 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:46.615 15:17:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:06:46.615 15:17:03 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:46.615 15:17:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:06:46.615 15:17:03 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:46.615 15:17:03 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:06:46.615 15:17:03 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:46.876 request:
00:06:46.876 {
00:06:46.876 "method": "env_dpdk_get_mem_stats",
00:06:46.876 "req_id": 1
00:06:46.876 }
00:06:46.876 Got JSON-RPC error response
00:06:46.876 response:
00:06:46.876 {
00:06:46.876 "code": -32601,
00:06:46.876 "message": "Method not found"
00:06:46.876 }
00:06:46.876 15:17:04 -- common/autotest_common.sh@641 -- # es=1
00:06:46.876 15:17:04 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:06:46.876 15:17:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:06:46.876 15:17:04 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:06:46.876 15:17:04 -- app/cmdline.sh@1 -- # killprocess 1441218
00:06:46.876 15:17:04 -- common/autotest_common.sh@936 -- # '[' -z 1441218 ']'
00:06:46.876 15:17:04 -- common/autotest_common.sh@940 -- # kill -0 1441218
00:06:46.876 15:17:04 -- common/autotest_common.sh@941 -- # uname
00:06:46.876 15:17:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:46.876 15:17:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1441218
00:06:46.876 15:17:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:46.876 15:17:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:46.876 15:17:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1441218'
00:06:46.876 killing process with pid 1441218
00:06:46.876 15:17:04 -- common/autotest_common.sh@955 -- # kill 1441218
00:06:46.876 15:17:04 -- common/autotest_common.sh@960 -- # wait 1441218
00:06:47.142
00:06:47.142 real 0m1.516s
00:06:47.142 user 0m1.806s
00:06:47.142 sys 0m0.386s
00:06:47.142 15:17:04 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:47.142 15:17:04 -- common/autotest_common.sh@10 -- # set +x
00:06:47.142 ************************************
00:06:47.142 END TEST app_cmdline
00:06:47.142 ************************************
00:06:47.142 15:17:04 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:06:47.142 15:17:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:47.142 15:17:04 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:47.142 15:17:04 -- common/autotest_common.sh@10 -- # set +x
00:06:47.142 ************************************
00:06:47.142 START TEST version
00:06:47.142 ************************************
00:06:47.142 15:17:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:06:47.403 * Looking for test storage...
00:06:47.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:06:47.403 15:17:04 -- app/version.sh@17 -- # get_header_version major
00:06:47.403 15:17:04 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:06:47.403 15:17:04 -- app/version.sh@14 -- # cut -f2
00:06:47.403 15:17:04 -- app/version.sh@14 -- # tr -d '"'
00:06:47.403 15:17:04 -- app/version.sh@17 -- # major=24
00:06:47.403 15:17:04 -- app/version.sh@18 -- # get_header_version minor
00:06:47.403 15:17:04 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:06:47.403 15:17:04 -- app/version.sh@14 -- # cut -f2
00:06:47.403 15:17:04 -- app/version.sh@14 -- # tr -d '"'
00:06:47.403 15:17:04 -- app/version.sh@18 -- # minor=5
00:06:47.403 15:17:04 -- app/version.sh@19 -- # get_header_version patch
00:06:47.403 15:17:04 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:06:47.403 15:17:04 -- app/version.sh@14 -- # cut -f2
00:06:47.403 15:17:04 -- app/version.sh@14 -- # tr -d '"'
00:06:47.403 15:17:04 -- app/version.sh@19 -- # patch=0
00:06:47.403 15:17:04 -- app/version.sh@20 -- # get_header_version suffix
00:06:47.403 15:17:04 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:06:47.403 15:17:04 -- app/version.sh@14 -- # cut -f2
00:06:47.403 15:17:04 -- app/version.sh@14 -- # tr -d '"'
00:06:47.403 15:17:04 -- app/version.sh@20 -- # suffix=-pre
00:06:47.403 15:17:04 -- app/version.sh@22 -- # version=24.5
00:06:47.403 15:17:04 -- app/version.sh@25 -- # (( patch != 0 ))
00:06:47.403 15:17:04 -- app/version.sh@28 -- # version=24.5rc0
00:06:47.403 15:17:04 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python
00:06:47.403 15:17:04 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:06:47.403 15:17:04 -- app/version.sh@30 -- # py_version=24.5rc0
00:06:47.403 15:17:04 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]]
00:06:47.403
00:06:47.403 real 0m0.190s
00:06:47.403 user 0m0.085s
00:06:47.403 sys 0m0.147s
00:06:47.403 15:17:04 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:47.403 15:17:04 -- common/autotest_common.sh@10 -- # set +x
00:06:47.403 ************************************
00:06:47.403 END TEST version
00:06:47.403 ************************************
00:06:47.403 15:17:04 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']'
00:06:47.403 15:17:04 -- spdk/autotest.sh@194 -- # uname -s
00:06:47.403 15:17:04 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:06:47.403 15:17:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:06:47.403 15:17:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:06:47.403 15:17:04 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:06:47.403 15:17:04 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']'
00:06:47.403 15:17:04 -- spdk/autotest.sh@258 -- # timing_exit lib
00:06:47.403 15:17:04 -- common/autotest_common.sh@716 -- # xtrace_disable
00:06:47.403 15:17:04 -- common/autotest_common.sh@10 -- # set +x
00:06:47.403 15:17:04 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']'
00:06:47.403 15:17:04 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']'
00:06:47.403 15:17:04 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']'
00:06:47.403 15:17:04 -- spdk/autotest.sh@278 -- # export NET_TYPE
00:06:47.403 15:17:04 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']'
00:06:47.403 15:17:04 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']'
00:06:47.403 15:17:04 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp
00:06:47.403 15:17:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:06:47.403 15:17:04 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:47.403 15:17:04 -- common/autotest_common.sh@10 -- # set +x
00:06:47.664 ************************************
00:06:47.664 START TEST nvmf_tcp
00:06:47.664 ************************************
00:06:47.664 15:17:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp
00:06:47.664 * Looking for test storage...
00:06:47.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:06:47.664 15:17:04 -- nvmf/nvmf.sh@10 -- # uname -s
00:06:47.664 15:17:05 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']'
00:06:47.664 15:17:05 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:06:47.664 15:17:05 -- nvmf/common.sh@7 -- # uname -s
00:06:47.664 15:17:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:47.664 15:17:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:47.664 15:17:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:47.664 15:17:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:47.664 15:17:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:47.664 15:17:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:47.664 15:17:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:47.664 15:17:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:47.664 15:17:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:47.664 15:17:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:47.925 15:17:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:06:47.925 15:17:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:06:47.925 15:17:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:47.925 15:17:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:47.925 15:17:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:06:47.925 15:17:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:47.925 15:17:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:47.925 15:17:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:47.925 15:17:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:47.925 15:17:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:47.925 15:17:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:47.925 15:17:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:47.925 15:17:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:47.925 15:17:05 -- paths/export.sh@5 -- # export PATH
00:06:47.925 15:17:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:47.925 15:17:05 -- nvmf/common.sh@47 -- # : 0
00:06:47.925 15:17:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:06:47.925 15:17:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:06:47.925 15:17:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:47.925 15:17:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:47.925 15:17:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:47.925 15:17:05 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:06:47.925 15:17:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:06:47.925 15:17:05 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:06:47.925 15:17:05 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:06:47.925 15:17:05 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@")
00:06:47.925 15:17:05 -- nvmf/nvmf.sh@20 -- # timing_enter target
00:06:47.925 15:17:05 -- common/autotest_common.sh@710 -- # xtrace_disable
00:06:47.925 15:17:05 -- common/autotest_common.sh@10 -- # set +x
00:06:47.925 15:17:05 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]]
00:06:47.925 15:17:05 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp
00:06:47.925 15:17:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:06:47.925 15:17:05 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:47.925 15:17:05 -- common/autotest_common.sh@10 -- # set +x
00:06:47.925 ************************************
00:06:47.925 START TEST nvmf_example
00:06:47.925 ************************************
00:06:47.925 15:17:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp
00:06:47.925 * Looking for test storage...
00:06:48.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:48.187 15:17:05 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:48.187 15:17:05 -- nvmf/common.sh@7 -- # uname -s 00:06:48.187 15:17:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.187 15:17:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.187 15:17:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.187 15:17:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.187 15:17:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.187 15:17:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.187 15:17:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.187 15:17:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.187 15:17:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.187 15:17:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.187 15:17:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:48.187 15:17:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:48.187 15:17:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.187 15:17:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.187 15:17:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:48.187 15:17:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.187 15:17:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:48.187 15:17:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.187 15:17:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.187 15:17:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.187 15:17:05 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.187 15:17:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.187 15:17:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.187 15:17:05 -- paths/export.sh@5 -- # export PATH 00:06:48.187 15:17:05 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.187 15:17:05 -- nvmf/common.sh@47 -- # : 0 00:06:48.187 15:17:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:48.187 15:17:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:48.187 15:17:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.187 15:17:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.187 15:17:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.187 15:17:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:48.187 15:17:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:48.187 15:17:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:48.187 15:17:05 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:48.187 15:17:05 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:48.187 15:17:05 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:48.187 15:17:05 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:48.187 15:17:05 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:48.187 15:17:05 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:48.187 15:17:05 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:48.187 15:17:05 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:48.187 15:17:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:48.187 15:17:05 -- common/autotest_common.sh@10 -- # set +x 00:06:48.187 15:17:05 -- 
target/nvmf_example.sh@41 -- # nvmftestinit 00:06:48.187 15:17:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:48.187 15:17:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:48.187 15:17:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:48.187 15:17:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:48.187 15:17:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:48.187 15:17:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.187 15:17:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:48.187 15:17:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.187 15:17:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:48.187 15:17:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:48.187 15:17:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:48.187 15:17:05 -- common/autotest_common.sh@10 -- # set +x 00:06:56.333 15:17:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:56.333 15:17:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:56.333 15:17:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:56.333 15:17:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:56.333 15:17:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:56.333 15:17:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:56.333 15:17:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:56.333 15:17:12 -- nvmf/common.sh@295 -- # net_devs=() 00:06:56.333 15:17:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:56.333 15:17:12 -- nvmf/common.sh@296 -- # e810=() 00:06:56.333 15:17:12 -- nvmf/common.sh@296 -- # local -ga e810 00:06:56.333 15:17:12 -- nvmf/common.sh@297 -- # x722=() 00:06:56.333 15:17:12 -- nvmf/common.sh@297 -- # local -ga x722 00:06:56.333 15:17:12 -- nvmf/common.sh@298 -- # mlx=() 00:06:56.333 15:17:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:56.333 15:17:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:06:56.333 15:17:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:56.333 15:17:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:56.333 15:17:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:56.333 15:17:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:56.333 15:17:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:56.333 15:17:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:56.333 15:17:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:56.333 15:17:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:56.333 15:17:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:56.333 15:17:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:56.333 15:17:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:56.333 15:17:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:56.333 15:17:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:56.333 15:17:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:56.333 15:17:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:56.333 15:17:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:56.333 15:17:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:56.333 15:17:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:56.333 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:56.333 15:17:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:56.333 15:17:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:56.333 15:17:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:56.333 15:17:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:56.333 15:17:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:56.333 15:17:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:06:56.333 15:17:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:56.333 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:56.333 15:17:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:56.333 15:17:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:56.333 15:17:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:56.333 15:17:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:56.333 15:17:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:56.333 15:17:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:56.333 15:17:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:56.333 15:17:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:56.333 15:17:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:56.333 15:17:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:56.333 15:17:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:56.334 15:17:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:56.334 15:17:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:56.334 Found net devices under 0000:31:00.0: cvl_0_0 00:06:56.334 15:17:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:56.334 15:17:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:56.334 15:17:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:56.334 15:17:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:56.334 15:17:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:56.334 15:17:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:56.334 Found net devices under 0000:31:00.1: cvl_0_1 00:06:56.334 15:17:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:56.334 15:17:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:56.334 15:17:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:56.334 15:17:12 -- 
nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:56.334 15:17:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:56.334 15:17:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:56.334 15:17:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:56.334 15:17:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:56.334 15:17:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:56.334 15:17:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:56.334 15:17:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:56.334 15:17:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:56.334 15:17:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:56.334 15:17:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:56.334 15:17:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:56.334 15:17:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:56.334 15:17:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:56.334 15:17:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:56.334 15:17:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:56.334 15:17:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:56.334 15:17:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:56.334 15:17:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:56.334 15:17:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:56.334 15:17:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:56.334 15:17:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:56.334 15:17:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:56.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:56.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:06:56.334 00:06:56.334 --- 10.0.0.2 ping statistics --- 00:06:56.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.334 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:06:56.334 15:17:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:56.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:56.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:06:56.334 00:06:56.334 --- 10.0.0.1 ping statistics --- 00:06:56.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.334 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:06:56.334 15:17:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:56.334 15:17:12 -- nvmf/common.sh@411 -- # return 0 00:06:56.334 15:17:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:56.334 15:17:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:56.334 15:17:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:56.334 15:17:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:56.334 15:17:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:56.334 15:17:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:56.334 15:17:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:56.334 15:17:12 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:56.334 15:17:12 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:56.334 15:17:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:56.334 15:17:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.334 15:17:12 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:56.334 15:17:12 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:56.334 15:17:12 -- target/nvmf_example.sh@34 -- # nvmfpid=1445434 00:06:56.334 15:17:12 -- target/nvmf_example.sh@35 -- # trap 'process_shm 
--id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:56.334 15:17:12 -- target/nvmf_example.sh@36 -- # waitforlisten 1445434 00:06:56.334 15:17:12 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:56.334 15:17:12 -- common/autotest_common.sh@817 -- # '[' -z 1445434 ']' 00:06:56.334 15:17:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.334 15:17:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:56.334 15:17:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.334 15:17:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:56.334 15:17:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.334 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.334 15:17:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:56.334 15:17:13 -- common/autotest_common.sh@850 -- # return 0 00:06:56.334 15:17:13 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:56.334 15:17:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:56.334 15:17:13 -- common/autotest_common.sh@10 -- # set +x 00:06:56.334 15:17:13 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:56.334 15:17:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.334 15:17:13 -- common/autotest_common.sh@10 -- # set +x 00:06:56.334 15:17:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.334 15:17:13 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:56.334 15:17:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.334 15:17:13 -- common/autotest_common.sh@10 -- # set +x 00:06:56.334 15:17:13 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.334 15:17:13 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:56.334 15:17:13 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:56.334 15:17:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.334 15:17:13 -- common/autotest_common.sh@10 -- # set +x 00:06:56.334 15:17:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.334 15:17:13 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:56.334 15:17:13 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:56.334 15:17:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.334 15:17:13 -- common/autotest_common.sh@10 -- # set +x 00:06:56.334 15:17:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.334 15:17:13 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:56.334 15:17:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:56.334 15:17:13 -- common/autotest_common.sh@10 -- # set +x 00:06:56.334 15:17:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:56.334 15:17:13 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:56.334 15:17:13 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:56.334 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.575 Initializing NVMe Controllers 00:07:08.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:08.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:08.575 Initialization complete. 
Launching workers. 00:07:08.575 ======================================================== 00:07:08.575 Latency(us) 00:07:08.575 Device Information : IOPS MiB/s Average min max 00:07:08.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19108.00 74.64 3351.18 606.86 19013.06 00:07:08.575 ======================================================== 00:07:08.575 Total : 19108.00 74.64 3351.18 606.86 19013.06 00:07:08.575 00:07:08.575 15:17:23 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:08.575 15:17:23 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:08.575 15:17:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:08.575 15:17:23 -- nvmf/common.sh@117 -- # sync 00:07:08.575 15:17:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:08.575 15:17:23 -- nvmf/common.sh@120 -- # set +e 00:07:08.575 15:17:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:08.575 15:17:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:08.575 rmmod nvme_tcp 00:07:08.575 rmmod nvme_fabrics 00:07:08.575 rmmod nvme_keyring 00:07:08.575 15:17:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:08.575 15:17:23 -- nvmf/common.sh@124 -- # set -e 00:07:08.575 15:17:23 -- nvmf/common.sh@125 -- # return 0 00:07:08.575 15:17:23 -- nvmf/common.sh@478 -- # '[' -n 1445434 ']' 00:07:08.575 15:17:23 -- nvmf/common.sh@479 -- # killprocess 1445434 00:07:08.576 15:17:23 -- common/autotest_common.sh@936 -- # '[' -z 1445434 ']' 00:07:08.576 15:17:23 -- common/autotest_common.sh@940 -- # kill -0 1445434 00:07:08.576 15:17:23 -- common/autotest_common.sh@941 -- # uname 00:07:08.576 15:17:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:08.576 15:17:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1445434 00:07:08.576 15:17:24 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:07:08.576 15:17:24 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:07:08.576 15:17:24 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 1445434' 00:07:08.576 killing process with pid 1445434 00:07:08.576 15:17:24 -- common/autotest_common.sh@955 -- # kill 1445434 00:07:08.576 15:17:24 -- common/autotest_common.sh@960 -- # wait 1445434 00:07:08.576 nvmf threads initialize successfully 00:07:08.576 bdev subsystem init successfully 00:07:08.576 created a nvmf target service 00:07:08.576 create targets's poll groups done 00:07:08.576 all subsystems of target started 00:07:08.576 nvmf target is running 00:07:08.576 all subsystems of target stopped 00:07:08.576 destroy targets's poll groups done 00:07:08.576 destroyed the nvmf target service 00:07:08.576 bdev subsystem finish successfully 00:07:08.576 nvmf threads destroy successfully 00:07:08.576 15:17:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:08.576 15:17:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:08.576 15:17:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:08.576 15:17:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:08.576 15:17:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:08.576 15:17:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.576 15:17:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:08.576 15:17:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.836 15:17:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:08.836 15:17:26 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:08.836 15:17:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:08.836 15:17:26 -- common/autotest_common.sh@10 -- # set +x 00:07:08.836 00:07:08.836 real 0m21.009s 00:07:08.836 user 0m46.389s 00:07:08.836 sys 0m6.527s 00:07:08.836 15:17:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:08.836 15:17:26 -- common/autotest_common.sh@10 -- # set +x 00:07:08.836 ************************************ 00:07:08.836 END TEST 
nvmf_example 00:07:08.836 ************************************ 00:07:09.098 15:17:26 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:09.098 15:17:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:09.098 15:17:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.098 15:17:26 -- common/autotest_common.sh@10 -- # set +x 00:07:09.098 ************************************ 00:07:09.098 START TEST nvmf_filesystem 00:07:09.098 ************************************ 00:07:09.098 15:17:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:09.362 * Looking for test storage... 00:07:09.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:09.362 15:17:26 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:09.362 15:17:26 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:09.362 15:17:26 -- common/autotest_common.sh@34 -- # set -e 00:07:09.362 15:17:26 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:09.362 15:17:26 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:09.362 15:17:26 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:09.362 15:17:26 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:09.362 15:17:26 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:09.363 15:17:26 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:09.363 15:17:26 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:09.363 15:17:26 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:09.363 15:17:26 -- 
common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:09.363 15:17:26 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:09.363 15:17:26 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:09.363 15:17:26 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:09.363 15:17:26 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:09.363 15:17:26 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:09.363 15:17:26 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:09.363 15:17:26 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:09.363 15:17:26 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:09.363 15:17:26 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:09.363 15:17:26 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:09.363 15:17:26 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:09.363 15:17:26 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:09.363 15:17:26 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:09.363 15:17:26 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:09.363 15:17:26 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:09.363 15:17:26 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:09.363 15:17:26 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:09.363 15:17:26 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:09.363 15:17:26 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:09.363 15:17:26 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:09.363 15:17:26 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:09.363 15:17:26 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:09.363 15:17:26 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:09.363 15:17:26 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:09.363 15:17:26 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 
00:07:09.363 15:17:26 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:09.363 15:17:26 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:09.363 15:17:26 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:09.363 15:17:26 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:09.363 15:17:26 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:09.363 15:17:26 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:09.363 15:17:26 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:09.363 15:17:26 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:09.363 15:17:26 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:09.363 15:17:26 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:09.363 15:17:26 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:09.363 15:17:26 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:09.363 15:17:26 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:09.363 15:17:26 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:09.363 15:17:26 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:09.363 15:17:26 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:09.363 15:17:26 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:09.363 15:17:26 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:09.363 15:17:26 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:09.363 15:17:26 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:09.363 15:17:26 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:09.363 15:17:26 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:09.363 15:17:26 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:09.363 15:17:26 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:07:09.363 15:17:26 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:07:09.363 15:17:26 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 
00:07:09.363 15:17:26 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:07:09.363 15:17:26 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:07:09.363 15:17:26 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:07:09.363 15:17:26 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:07:09.363 15:17:26 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:07:09.363 15:17:26 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:07:09.363 15:17:26 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:07:09.363 15:17:26 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:07:09.363 15:17:26 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:07:09.363 15:17:26 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:07:09.363 15:17:26 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:07:09.363 15:17:26 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:07:09.363 15:17:26 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:09.363 15:17:26 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:07:09.363 15:17:26 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:07:09.363 15:17:26 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:07:09.363 15:17:26 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:07:09.363 15:17:26 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:07:09.363 15:17:26 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:07:09.363 15:17:26 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:07:09.363 15:17:26 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:07:09.363 15:17:26 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:07:09.363 15:17:26 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:07:09.363 15:17:26 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:07:09.363 15:17:26 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:09.363 15:17:26 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:07:09.363 15:17:26 -- common/build_config.sh@82 -- # 
CONFIG_URING=n 00:07:09.363 15:17:26 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:09.363 15:17:26 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:09.363 15:17:26 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:09.363 15:17:26 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:09.363 15:17:26 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:09.363 15:17:26 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:09.363 15:17:26 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:09.363 15:17:26 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:09.363 15:17:26 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:09.363 15:17:26 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:09.363 15:17:26 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:09.363 15:17:26 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:09.363 15:17:26 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:09.363 15:17:26 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:09.363 15:17:26 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:09.363 15:17:26 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:09.363 #define SPDK_CONFIG_H 00:07:09.363 #define SPDK_CONFIG_APPS 1 00:07:09.363 #define SPDK_CONFIG_ARCH native 00:07:09.363 #undef SPDK_CONFIG_ASAN 00:07:09.363 
#undef SPDK_CONFIG_AVAHI 00:07:09.363 #undef SPDK_CONFIG_CET 00:07:09.363 #define SPDK_CONFIG_COVERAGE 1 00:07:09.363 #define SPDK_CONFIG_CROSS_PREFIX 00:07:09.363 #undef SPDK_CONFIG_CRYPTO 00:07:09.363 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:09.363 #undef SPDK_CONFIG_CUSTOMOCF 00:07:09.363 #undef SPDK_CONFIG_DAOS 00:07:09.363 #define SPDK_CONFIG_DAOS_DIR 00:07:09.363 #define SPDK_CONFIG_DEBUG 1 00:07:09.363 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:09.363 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:09.363 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:09.363 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:09.363 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:09.363 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:09.363 #define SPDK_CONFIG_EXAMPLES 1 00:07:09.363 #undef SPDK_CONFIG_FC 00:07:09.363 #define SPDK_CONFIG_FC_PATH 00:07:09.363 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:09.363 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:09.363 #undef SPDK_CONFIG_FUSE 00:07:09.363 #undef SPDK_CONFIG_FUZZER 00:07:09.363 #define SPDK_CONFIG_FUZZER_LIB 00:07:09.363 #undef SPDK_CONFIG_GOLANG 00:07:09.363 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:09.363 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:09.363 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:09.363 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:09.363 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:09.363 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:09.363 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:09.363 #define SPDK_CONFIG_IDXD 1 00:07:09.363 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:09.363 #undef SPDK_CONFIG_IPSEC_MB 00:07:09.363 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:09.363 #define SPDK_CONFIG_ISAL 1 00:07:09.363 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:09.363 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:09.363 #define SPDK_CONFIG_LIBDIR 00:07:09.363 #undef SPDK_CONFIG_LTO 00:07:09.363 #define SPDK_CONFIG_MAX_LCORES 00:07:09.363 #define SPDK_CONFIG_NVME_CUSE 1 
00:07:09.363 #undef SPDK_CONFIG_OCF 00:07:09.363 #define SPDK_CONFIG_OCF_PATH 00:07:09.363 #define SPDK_CONFIG_OPENSSL_PATH 00:07:09.363 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:09.363 #define SPDK_CONFIG_PGO_DIR 00:07:09.363 #undef SPDK_CONFIG_PGO_USE 00:07:09.363 #define SPDK_CONFIG_PREFIX /usr/local 00:07:09.363 #undef SPDK_CONFIG_RAID5F 00:07:09.363 #undef SPDK_CONFIG_RBD 00:07:09.363 #define SPDK_CONFIG_RDMA 1 00:07:09.363 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:09.363 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:09.363 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:09.363 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:09.363 #define SPDK_CONFIG_SHARED 1 00:07:09.363 #undef SPDK_CONFIG_SMA 00:07:09.363 #define SPDK_CONFIG_TESTS 1 00:07:09.363 #undef SPDK_CONFIG_TSAN 00:07:09.363 #define SPDK_CONFIG_UBLK 1 00:07:09.363 #define SPDK_CONFIG_UBSAN 1 00:07:09.363 #undef SPDK_CONFIG_UNIT_TESTS 00:07:09.363 #undef SPDK_CONFIG_URING 00:07:09.363 #define SPDK_CONFIG_URING_PATH 00:07:09.363 #undef SPDK_CONFIG_URING_ZNS 00:07:09.363 #undef SPDK_CONFIG_USDT 00:07:09.363 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:09.363 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:09.363 #define SPDK_CONFIG_VFIO_USER 1 00:07:09.363 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:09.364 #define SPDK_CONFIG_VHOST 1 00:07:09.364 #define SPDK_CONFIG_VIRTIO 1 00:07:09.364 #undef SPDK_CONFIG_VTUNE 00:07:09.364 #define SPDK_CONFIG_VTUNE_DIR 00:07:09.364 #define SPDK_CONFIG_WERROR 1 00:07:09.364 #define SPDK_CONFIG_WPDK_DIR 00:07:09.364 #undef SPDK_CONFIG_XNVME 00:07:09.364 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:09.364 15:17:26 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:09.364 15:17:26 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:09.364 15:17:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.364 15:17:26 -- scripts/common.sh@516 -- # [[ 
-e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.364 15:17:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.364 15:17:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.364 15:17:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.364 15:17:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.364 15:17:26 -- paths/export.sh@5 -- # export PATH 00:07:09.364 15:17:26 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.364 15:17:26 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:09.364 15:17:26 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:09.364 15:17:26 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:09.364 15:17:26 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:09.364 15:17:26 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:09.364 15:17:26 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:09.364 15:17:26 -- pm/common@67 -- # TEST_TAG=N/A 00:07:09.364 15:17:26 -- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:09.364 15:17:26 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:09.364 15:17:26 -- pm/common@71 -- # uname -s 00:07:09.364 15:17:26 -- pm/common@71 -- # PM_OS=Linux 00:07:09.364 15:17:26 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:09.364 15:17:26 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:07:09.364 15:17:26 -- pm/common@76 -- # [[ Linux == Linux ]] 00:07:09.364 15:17:26 -- pm/common@76 -- # [[ ............................... 
!= QEMU ]] 00:07:09.364 15:17:26 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:07:09.364 15:17:26 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:09.364 15:17:26 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:09.364 15:17:26 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:07:09.364 15:17:26 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:07:09.364 15:17:26 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:09.364 15:17:26 -- common/autotest_common.sh@57 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:09.364 15:17:26 -- common/autotest_common.sh@61 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:09.364 15:17:26 -- common/autotest_common.sh@63 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:09.364 15:17:26 -- common/autotest_common.sh@65 -- # : 1 00:07:09.364 15:17:26 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:09.364 15:17:26 -- common/autotest_common.sh@67 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:09.364 15:17:26 -- common/autotest_common.sh@69 -- # : 00:07:09.364 15:17:26 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:09.364 15:17:26 -- common/autotest_common.sh@71 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:09.364 15:17:26 -- common/autotest_common.sh@73 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:09.364 15:17:26 -- common/autotest_common.sh@75 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:09.364 15:17:26 -- common/autotest_common.sh@77 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 
00:07:09.364 15:17:26 -- common/autotest_common.sh@79 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:09.364 15:17:26 -- common/autotest_common.sh@81 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:09.364 15:17:26 -- common/autotest_common.sh@83 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:09.364 15:17:26 -- common/autotest_common.sh@85 -- # : 1 00:07:09.364 15:17:26 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:09.364 15:17:26 -- common/autotest_common.sh@87 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:09.364 15:17:26 -- common/autotest_common.sh@89 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:09.364 15:17:26 -- common/autotest_common.sh@91 -- # : 1 00:07:09.364 15:17:26 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:09.364 15:17:26 -- common/autotest_common.sh@93 -- # : 1 00:07:09.364 15:17:26 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:09.364 15:17:26 -- common/autotest_common.sh@95 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:09.364 15:17:26 -- common/autotest_common.sh@97 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:09.364 15:17:26 -- common/autotest_common.sh@99 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:09.364 15:17:26 -- common/autotest_common.sh@101 -- # : tcp 00:07:09.364 15:17:26 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:09.364 15:17:26 -- common/autotest_common.sh@103 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:09.364 15:17:26 -- common/autotest_common.sh@105 -- # : 0 
00:07:09.364 15:17:26 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:09.364 15:17:26 -- common/autotest_common.sh@107 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:09.364 15:17:26 -- common/autotest_common.sh@109 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:09.364 15:17:26 -- common/autotest_common.sh@111 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:09.364 15:17:26 -- common/autotest_common.sh@113 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:09.364 15:17:26 -- common/autotest_common.sh@115 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:09.364 15:17:26 -- common/autotest_common.sh@117 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:09.364 15:17:26 -- common/autotest_common.sh@119 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:09.364 15:17:26 -- common/autotest_common.sh@121 -- # : 1 00:07:09.364 15:17:26 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:09.364 15:17:26 -- common/autotest_common.sh@123 -- # : 00:07:09.364 15:17:26 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:09.364 15:17:26 -- common/autotest_common.sh@125 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:09.364 15:17:26 -- common/autotest_common.sh@127 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:09.364 15:17:26 -- common/autotest_common.sh@129 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:09.364 15:17:26 -- common/autotest_common.sh@131 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@132 -- # export 
SPDK_TEST_OCF 00:07:09.364 15:17:26 -- common/autotest_common.sh@133 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:09.364 15:17:26 -- common/autotest_common.sh@135 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:09.364 15:17:26 -- common/autotest_common.sh@137 -- # : 00:07:09.364 15:17:26 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:09.364 15:17:26 -- common/autotest_common.sh@139 -- # : true 00:07:09.364 15:17:26 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:09.364 15:17:26 -- common/autotest_common.sh@141 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:09.364 15:17:26 -- common/autotest_common.sh@143 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:09.364 15:17:26 -- common/autotest_common.sh@145 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:09.364 15:17:26 -- common/autotest_common.sh@147 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:09.364 15:17:26 -- common/autotest_common.sh@149 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:09.364 15:17:26 -- common/autotest_common.sh@151 -- # : 0 00:07:09.364 15:17:26 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:09.364 15:17:26 -- common/autotest_common.sh@153 -- # : e810 00:07:09.365 15:17:26 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:09.365 15:17:26 -- common/autotest_common.sh@155 -- # : 0 00:07:09.365 15:17:26 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:09.365 15:17:26 -- common/autotest_common.sh@157 -- # : 0 00:07:09.365 15:17:26 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:09.365 15:17:26 -- 
common/autotest_common.sh@159 -- # : 0 00:07:09.365 15:17:26 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:09.365 15:17:26 -- common/autotest_common.sh@161 -- # : 0 00:07:09.365 15:17:26 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:09.365 15:17:26 -- common/autotest_common.sh@163 -- # : 0 00:07:09.365 15:17:26 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:09.365 15:17:26 -- common/autotest_common.sh@166 -- # : 00:07:09.365 15:17:26 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:09.365 15:17:26 -- common/autotest_common.sh@168 -- # : 0 00:07:09.365 15:17:26 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:09.365 15:17:26 -- common/autotest_common.sh@170 -- # : 0 00:07:09.365 15:17:26 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:09.365 15:17:26 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:09.365 15:17:26 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:09.365 15:17:26 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:09.365 15:17:26 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:09.365 15:17:26 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:09.365 15:17:26 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:09.365 15:17:26 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:09.365 15:17:26 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:09.365 15:17:26 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:09.365 15:17:26 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:09.365 15:17:26 -- common/autotest_common.sh@184 -- 
# export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:09.365 15:17:26 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:09.365 15:17:26 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:09.365 15:17:26 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:09.365 15:17:26 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:09.365 15:17:26 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:09.365 15:17:26 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:09.365 15:17:26 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:09.365 15:17:26 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:09.365 15:17:26 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:09.365 
15:17:26 -- common/autotest_common.sh@199 -- # cat 00:07:09.365 15:17:26 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:07:09.365 15:17:26 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:09.365 15:17:26 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:09.365 15:17:26 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:09.365 15:17:26 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:09.365 15:17:26 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:07:09.365 15:17:26 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:07:09.365 15:17:26 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:09.365 15:17:26 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:09.365 15:17:26 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:09.365 15:17:26 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:09.365 15:17:26 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:09.365 15:17:26 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:09.365 15:17:26 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:09.365 15:17:26 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:09.365 15:17:26 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:09.365 
15:17:26 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:09.365 15:17:26 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:09.365 15:17:26 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:09.365 15:17:26 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:07:09.365 15:17:26 -- common/autotest_common.sh@252 -- # export valgrind= 00:07:09.365 15:17:26 -- common/autotest_common.sh@252 -- # valgrind= 00:07:09.365 15:17:26 -- common/autotest_common.sh@258 -- # uname -s 00:07:09.365 15:17:26 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:07:09.365 15:17:26 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:07:09.365 15:17:26 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:07:09.365 15:17:26 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:07:09.365 15:17:26 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:07:09.365 15:17:26 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:07:09.365 15:17:26 -- common/autotest_common.sh@268 -- # MAKE=make 00:07:09.365 15:17:26 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j144 00:07:09.365 15:17:26 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:07:09.365 15:17:26 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:07:09.365 15:17:26 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:07:09.365 15:17:26 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:07:09.365 15:17:26 -- common/autotest_common.sh@289 -- # for i in "$@" 00:07:09.365 15:17:26 -- common/autotest_common.sh@290 -- # case "$i" in 00:07:09.365 15:17:26 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:07:09.365 15:17:26 -- common/autotest_common.sh@307 -- # [[ -z 1448317 ]] 00:07:09.365 15:17:26 -- common/autotest_common.sh@307 -- # kill -0 1448317 00:07:09.365 15:17:26 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:07:09.365 15:17:26 
-- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:07:09.365 15:17:26 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:07:09.365 15:17:26 -- common/autotest_common.sh@320 -- # local mount target_dir 00:07:09.365 15:17:26 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:07:09.365 15:17:26 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:07:09.365 15:17:26 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:07:09.365 15:17:26 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:07:09.365 15:17:26 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.PjajZF 00:07:09.365 15:17:26 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:09.365 15:17:26 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:07:09.365 15:17:26 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:07:09.365 15:17:26 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.PjajZF/tests/target /tmp/spdk.PjajZF 00:07:09.365 15:17:26 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:07:09.365 15:17:26 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:09.365 15:17:26 -- common/autotest_common.sh@316 -- # df -T 00:07:09.365 15:17:26 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:07:09.365 15:17:26 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:07:09.365 15:17:26 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:07:09.365 15:17:26 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:07:09.365 15:17:26 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:07:09.365 15:17:26 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:07:09.365 15:17:26 -- common/autotest_common.sh@349 -- # 
read -r source fs size use avail _ mount 00:07:09.365 15:17:26 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:07:09.365 15:17:26 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:07:09.365 15:17:26 -- common/autotest_common.sh@351 -- # avails["$mount"]=1052192768 00:07:09.365 15:17:26 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:07:09.365 15:17:26 -- common/autotest_common.sh@352 -- # uses["$mount"]=4232237056 00:07:09.365 15:17:26 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:09.365 15:17:26 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:07:09.365 15:17:26 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:07:09.365 15:17:26 -- common/autotest_common.sh@351 -- # avails["$mount"]=122876231680 00:07:09.366 15:17:26 -- common/autotest_common.sh@351 -- # sizes["$mount"]=129371000832 00:07:09.366 15:17:26 -- common/autotest_common.sh@352 -- # uses["$mount"]=6494769152 00:07:09.366 15:17:26 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:09.366 15:17:26 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:09.366 15:17:26 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:09.366 15:17:26 -- common/autotest_common.sh@351 -- # avails["$mount"]=64682885120 00:07:09.366 15:17:26 -- common/autotest_common.sh@351 -- # sizes["$mount"]=64685498368 00:07:09.366 15:17:26 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:07:09.366 15:17:26 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:09.366 15:17:26 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:09.366 15:17:26 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:09.366 15:17:26 -- common/autotest_common.sh@351 -- # avails["$mount"]=25864454144 00:07:09.366 15:17:26 -- common/autotest_common.sh@351 -- # sizes["$mount"]=25874202624 00:07:09.366 15:17:26 -- 
common/autotest_common.sh@352 -- # uses["$mount"]=9748480 00:07:09.366 15:17:26 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:09.366 15:17:26 -- common/autotest_common.sh@350 -- # mounts["$mount"]=efivarfs 00:07:09.366 15:17:26 -- common/autotest_common.sh@350 -- # fss["$mount"]=efivarfs 00:07:09.366 15:17:26 -- common/autotest_common.sh@351 -- # avails["$mount"]=189440 00:07:09.366 15:17:26 -- common/autotest_common.sh@351 -- # sizes["$mount"]=507904 00:07:09.366 15:17:26 -- common/autotest_common.sh@352 -- # uses["$mount"]=314368 00:07:09.366 15:17:26 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:09.366 15:17:26 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:09.366 15:17:26 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:09.366 15:17:26 -- common/autotest_common.sh@351 -- # avails["$mount"]=64684933120 00:07:09.366 15:17:26 -- common/autotest_common.sh@351 -- # sizes["$mount"]=64685502464 00:07:09.366 15:17:26 -- common/autotest_common.sh@352 -- # uses["$mount"]=569344 00:07:09.366 15:17:26 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:09.366 15:17:26 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:09.366 15:17:26 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:09.366 15:17:26 -- common/autotest_common.sh@351 -- # avails["$mount"]=12937093120 00:07:09.366 15:17:26 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12937097216 00:07:09.366 15:17:26 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:07:09.366 15:17:26 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:09.366 15:17:26 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:07:09.366 * Looking for test storage... 
00:07:09.366 15:17:26 -- common/autotest_common.sh@357 -- # local target_space new_size 00:07:09.366 15:17:26 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:07:09.366 15:17:26 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:09.366 15:17:26 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:09.366 15:17:26 -- common/autotest_common.sh@361 -- # mount=/ 00:07:09.366 15:17:26 -- common/autotest_common.sh@363 -- # target_space=122876231680 00:07:09.366 15:17:26 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:07:09.366 15:17:26 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:07:09.366 15:17:26 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:07:09.366 15:17:26 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:07:09.366 15:17:26 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:07:09.366 15:17:26 -- common/autotest_common.sh@370 -- # new_size=8709361664 00:07:09.366 15:17:26 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:09.366 15:17:26 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:09.366 15:17:26 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:09.366 15:17:26 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:09.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:09.366 15:17:26 -- common/autotest_common.sh@378 -- # return 0 00:07:09.366 15:17:26 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:07:09.366 15:17:26 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 
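The trace above shows the harness's storage scan: a `read -r source fs size use avail _ mount` loop fills associative arrays keyed by mount point, then the candidate whose free space covers the requested size is exported as `SPDK_TEST_STORAGE`. The following is a hypothetical stand-alone sketch of that logic (function names and the sample `df` rows are illustrative, not the real `autotest_common.sh` helpers); the rows mirror the `spdk_root`/`pmem0` figures from the log:

```shell
#!/usr/bin/env bash
# Associative arrays keyed by mount point, as in the traced loop.
declare -A mounts fss avails sizes uses

# Parse `df -T -B1`-style rows: source fstype size used avail use% mount.
parse_df_rows() {
  local source fs size use avail _ mount
  while read -r source fs size use avail _ mount; do
    # Skip the header line and blanks.
    [[ $source == Filesystem || -z $source ]] && continue
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$size
    uses["$mount"]=$use
    avails["$mount"]=$avail
  done
}

# Print the first mount whose available bytes cover the requested size.
find_test_storage() {
  local requested_size=$1 mount
  for mount in "${!avails[@]}"; do
    if (( ${avails[$mount]} >= requested_size )); then
      printf '%s\n' "$mount"
      return 0
    fi
  done
  return 1
}

parse_df_rows <<'EOF'
Filesystem Type 1B-blocks Used Avail Use% Mounted on
spdk_root overlay 129371000832 6494769152 122876231680 6% /
/dev/pmem0 ext2 5284429824 4232237056 1052192768 81% /mnt/pmem
EOF

# Ask for 10 GiB; only the overlay root has that much free.
best=$(find_test_storage 10737418240)
```

Only `/` satisfies a 10 GiB request here (the pmem mount has ~1 GB free), which matches the log's choice of `target_space=122876231680` on the overlay root.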
00:07:09.366 15:17:26 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:09.366 15:17:26 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:09.366 15:17:26 -- common/autotest_common.sh@1673 -- # true 00:07:09.366 15:17:26 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:07:09.366 15:17:26 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:09.366 15:17:26 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:09.366 15:17:26 -- common/autotest_common.sh@27 -- # exec 00:07:09.366 15:17:26 -- common/autotest_common.sh@29 -- # exec 00:07:09.366 15:17:26 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:09.366 15:17:26 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:09.366 15:17:26 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:09.366 15:17:26 -- common/autotest_common.sh@18 -- # set -x 00:07:09.366 15:17:26 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:09.366 15:17:26 -- nvmf/common.sh@7 -- # uname -s 00:07:09.366 15:17:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:09.366 15:17:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:09.366 15:17:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.366 15:17:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:09.366 15:17:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.366 15:17:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.366 15:17:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.366 15:17:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.366 15:17:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.366 15:17:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.366 15:17:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
00:07:09.366 15:17:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:09.366 15:17:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:09.366 15:17:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.366 15:17:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:09.366 15:17:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:09.366 15:17:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:09.366 15:17:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.366 15:17:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.366 15:17:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.366 15:17:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.366 15:17:26 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.366 15:17:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.366 15:17:26 -- paths/export.sh@5 -- # export PATH 00:07:09.366 15:17:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.366 15:17:26 -- nvmf/common.sh@47 
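The repeated `/opt/golangci`, `/opt/protoc`, and `/opt/go` entries above show `paths/export.sh` prepending the same toolchain directories on every `source`, so `PATH` balloons across runs. A common guard (not something the SPDK scripts do here, purely an illustrative helper) is to dedupe the list while preserving first-seen order:

```shell
#!/usr/bin/env bash
# Hypothetical helper: drop repeated PATH entries, keeping first occurrence.
dedupe_path() {
  local out= dir
  local IFS=:          # split the argument on colons
  local -A seen
  for dir in $1; do
    [[ -z $dir || -n ${seen[$dir]} ]] && continue
    seen[$dir]=1
    out+=${out:+:}$dir # join with ':' after the first entry
  done
  printf '%s\n' "$out"
}

# Sample modeled on the log's repeated toolchain prefixes.
SAMPLE='/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/bin:/usr/bin'
DEDUPED=$(dedupe_path "$SAMPLE")
```

Applied to a real shell, this would be `PATH=$(dedupe_path "$PATH")` after the last `source` of the export script.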
-- # : 0 00:07:09.366 15:17:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:09.366 15:17:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:09.366 15:17:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:09.366 15:17:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.366 15:17:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.366 15:17:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:09.366 15:17:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:09.366 15:17:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:09.366 15:17:26 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:09.366 15:17:26 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:09.366 15:17:26 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:09.366 15:17:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:09.366 15:17:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:09.366 15:17:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:09.366 15:17:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:09.366 15:17:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:09.366 15:17:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.366 15:17:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:09.366 15:17:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.366 15:17:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:09.366 15:17:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:09.366 15:17:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:09.366 15:17:26 -- common/autotest_common.sh@10 -- # set +x 00:07:17.510 15:17:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:17.510 15:17:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:17.510 15:17:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:17.510 15:17:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:17.510 15:17:33 -- nvmf/common.sh@292 -- 
# local -a pci_net_devs 00:07:17.510 15:17:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:17.510 15:17:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:17.510 15:17:33 -- nvmf/common.sh@295 -- # net_devs=() 00:07:17.510 15:17:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:17.510 15:17:33 -- nvmf/common.sh@296 -- # e810=() 00:07:17.510 15:17:33 -- nvmf/common.sh@296 -- # local -ga e810 00:07:17.510 15:17:33 -- nvmf/common.sh@297 -- # x722=() 00:07:17.510 15:17:33 -- nvmf/common.sh@297 -- # local -ga x722 00:07:17.510 15:17:33 -- nvmf/common.sh@298 -- # mlx=() 00:07:17.510 15:17:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:17.510 15:17:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:17.510 15:17:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:17.510 15:17:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:17.510 15:17:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:17.510 15:17:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:17.510 15:17:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:17.510 15:17:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:17.510 15:17:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:17.510 15:17:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:17.510 15:17:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:17.510 15:17:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:17.510 15:17:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:17.510 15:17:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:17.510 15:17:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:17.510 15:17:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:17.510 15:17:33 -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:07:17.510 15:17:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:17.510 15:17:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:17.510 15:17:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:17.510 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:17.510 15:17:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:17.510 15:17:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:17.510 15:17:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.510 15:17:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.510 15:17:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:17.510 15:17:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:17.510 15:17:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:17.510 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:17.510 15:17:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:17.510 15:17:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:17.510 15:17:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.510 15:17:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.510 15:17:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:17.510 15:17:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:17.511 15:17:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:17.511 15:17:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:17.511 15:17:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:17.511 15:17:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.511 15:17:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:17.511 15:17:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.511 15:17:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:17.511 Found net devices under 0000:31:00.0: cvl_0_0 00:07:17.511 15:17:33 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:17.511 15:17:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:17.511 15:17:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.511 15:17:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:17.511 15:17:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.511 15:17:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:17.511 Found net devices under 0000:31:00.1: cvl_0_1 00:07:17.511 15:17:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.511 15:17:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:17.511 15:17:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:17.511 15:17:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:17.511 15:17:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:17.511 15:17:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:17.511 15:17:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:17.511 15:17:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:17.511 15:17:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:17.511 15:17:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:17.511 15:17:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:17.511 15:17:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:17.511 15:17:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:17.511 15:17:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:17.511 15:17:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:17.511 15:17:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:17.511 15:17:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:17.511 15:17:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:17.511 15:17:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:17.511 15:17:33 -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:17.511 15:17:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:17.511 15:17:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:17.511 15:17:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:17.511 15:17:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:17.511 15:17:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:17.511 15:17:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:17.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:17.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:07:17.511 00:07:17.511 --- 10.0.0.2 ping statistics --- 00:07:17.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.511 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:07:17.511 15:17:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:17.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:17.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:07:17.511 00:07:17.511 --- 10.0.0.1 ping statistics --- 00:07:17.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.511 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:07:17.511 15:17:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:17.511 15:17:34 -- nvmf/common.sh@411 -- # return 0 00:07:17.511 15:17:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:17.511 15:17:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:17.511 15:17:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:17.511 15:17:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:17.511 15:17:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:17.511 15:17:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:17.511 15:17:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:17.511 15:17:34 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:17.511 15:17:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:17.511 15:17:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.511 15:17:34 -- common/autotest_common.sh@10 -- # set +x 00:07:17.511 ************************************ 00:07:17.511 START TEST nvmf_filesystem_no_in_capsule 00:07:17.511 ************************************ 00:07:17.511 15:17:34 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:07:17.511 15:17:34 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:17.511 15:17:34 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:17.511 15:17:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:17.511 15:17:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:17.511 15:17:34 -- common/autotest_common.sh@10 -- # set +x 00:07:17.511 15:17:34 -- nvmf/common.sh@470 -- # nvmfpid=1452258 00:07:17.511 15:17:34 -- nvmf/common.sh@471 -- # waitforlisten 1452258 
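The `nvmf_tcp_init` sequence traced above moves one port of the NIC pair (`cvl_0_0`) into a network namespace for the target, leaves its sibling (`cvl_0_1`) in the root namespace for the initiator, assigns `10.0.0.2`/`10.0.0.1`, opens TCP 4420, and verifies both directions with `ping`. The sketch below reproduces that command sequence with a dry-run wrapper so it can be read (and its output checked) without root or the Ice NICs; `run`/`setup_tcp_ns` are hypothetical names, and executing for real (`DRY_RUN=0`) requires root plus interfaces matching the log's:

```shell
#!/usr/bin/env bash
# Dry-run by default: print each command instead of executing it.
run() { if [[ ${DRY_RUN:-1} == 1 ]]; then printf '%s\n' "$*"; else "$@"; fi; }

# Mirror of the nvmf_tcp_init steps in the trace (namespace name included).
setup_tcp_ns() {
  local target_if=$1 initiator_if=$2 ns=${3:-cvl_0_0_ns_spdk}
  run ip netns add "$ns"
  run ip link set "$target_if" netns "$ns"
  run ip addr add 10.0.0.1/24 dev "$initiator_if"
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  run ip link set "$initiator_if" up
  run ip netns exec "$ns" ip link set "$target_if" up
  run ip netns exec "$ns" ip link set lo up
  run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}

CMDS=$(DRY_RUN=1 setup_tcp_ns cvl_0_0 cvl_0_1)
```

With the target listener inside the namespace, the app is later launched as `ip netns exec cvl_0_0_ns_spdk nvmf_tgt …`, exactly as the `NVMF_TARGET_NS_CMD` expansion shows.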
00:07:17.511 15:17:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:17.511 15:17:34 -- common/autotest_common.sh@817 -- # '[' -z 1452258 ']' 00:07:17.511 15:17:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.511 15:17:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:17.511 15:17:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.511 15:17:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:17.511 15:17:34 -- common/autotest_common.sh@10 -- # set +x 00:07:17.511 [2024-04-26 15:17:34.394128] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:07:17.511 [2024-04-26 15:17:34.394171] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.511 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.511 [2024-04-26 15:17:34.460009] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:17.511 [2024-04-26 15:17:34.526749] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.511 [2024-04-26 15:17:34.526787] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:17.511 [2024-04-26 15:17:34.526795] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.511 [2024-04-26 15:17:34.526802] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:17.511 [2024-04-26 15:17:34.526808] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:17.511 [2024-04-26 15:17:34.526900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.511 [2024-04-26 15:17:34.526950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.511 [2024-04-26 15:17:34.527252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.511 [2024-04-26 15:17:34.527253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.771 15:17:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:17.771 15:17:35 -- common/autotest_common.sh@850 -- # return 0 00:07:17.771 15:17:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:17.771 15:17:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:17.771 15:17:35 -- common/autotest_common.sh@10 -- # set +x 00:07:17.771 15:17:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.771 15:17:35 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:17.771 15:17:35 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:17.771 15:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.771 15:17:35 -- common/autotest_common.sh@10 -- # set +x 00:07:17.771 [2024-04-26 15:17:35.208425] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.771 15:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.771 15:17:35 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:17.771 15:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.771 15:17:35 -- common/autotest_common.sh@10 -- # set +x 00:07:18.032 Malloc1 00:07:18.032 15:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:18.032 15:17:35 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 
-a -s SPDKISFASTANDAWESOME 00:07:18.032 15:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:18.032 15:17:35 -- common/autotest_common.sh@10 -- # set +x 00:07:18.032 15:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:18.032 15:17:35 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:18.032 15:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:18.032 15:17:35 -- common/autotest_common.sh@10 -- # set +x 00:07:18.032 15:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:18.032 15:17:35 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:18.032 15:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:18.032 15:17:35 -- common/autotest_common.sh@10 -- # set +x 00:07:18.032 [2024-04-26 15:17:35.343168] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.032 15:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:18.032 15:17:35 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:18.032 15:17:35 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:18.032 15:17:35 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:18.032 15:17:35 -- common/autotest_common.sh@1366 -- # local bs 00:07:18.032 15:17:35 -- common/autotest_common.sh@1367 -- # local nb 00:07:18.032 15:17:35 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:18.032 15:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:18.032 15:17:35 -- common/autotest_common.sh@10 -- # set +x 00:07:18.032 15:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:18.032 15:17:35 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:18.032 { 00:07:18.032 "name": "Malloc1", 00:07:18.032 "aliases": [ 00:07:18.032 "fabc605c-aed9-46f0-8f94-0936e00ab935" 00:07:18.032 ], 00:07:18.032 "product_name": 
"Malloc disk", 00:07:18.032 "block_size": 512, 00:07:18.032 "num_blocks": 1048576, 00:07:18.032 "uuid": "fabc605c-aed9-46f0-8f94-0936e00ab935", 00:07:18.032 "assigned_rate_limits": { 00:07:18.032 "rw_ios_per_sec": 0, 00:07:18.032 "rw_mbytes_per_sec": 0, 00:07:18.032 "r_mbytes_per_sec": 0, 00:07:18.032 "w_mbytes_per_sec": 0 00:07:18.032 }, 00:07:18.032 "claimed": true, 00:07:18.032 "claim_type": "exclusive_write", 00:07:18.032 "zoned": false, 00:07:18.032 "supported_io_types": { 00:07:18.032 "read": true, 00:07:18.032 "write": true, 00:07:18.032 "unmap": true, 00:07:18.032 "write_zeroes": true, 00:07:18.032 "flush": true, 00:07:18.032 "reset": true, 00:07:18.032 "compare": false, 00:07:18.032 "compare_and_write": false, 00:07:18.032 "abort": true, 00:07:18.032 "nvme_admin": false, 00:07:18.032 "nvme_io": false 00:07:18.032 }, 00:07:18.032 "memory_domains": [ 00:07:18.032 { 00:07:18.032 "dma_device_id": "system", 00:07:18.032 "dma_device_type": 1 00:07:18.032 }, 00:07:18.032 { 00:07:18.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.032 "dma_device_type": 2 00:07:18.032 } 00:07:18.032 ], 00:07:18.032 "driver_specific": {} 00:07:18.032 } 00:07:18.032 ]' 00:07:18.032 15:17:35 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:18.032 15:17:35 -- common/autotest_common.sh@1369 -- # bs=512 00:07:18.032 15:17:35 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:18.032 15:17:35 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:18.032 15:17:35 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:18.032 15:17:35 -- common/autotest_common.sh@1374 -- # echo 512 00:07:18.032 15:17:35 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:18.032 15:17:35 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:19.945 15:17:36 -- target/filesystem.sh@62 
-- # waitforserial SPDKISFASTANDAWESOME 00:07:19.945 15:17:36 -- common/autotest_common.sh@1184 -- # local i=0 00:07:19.945 15:17:36 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:19.945 15:17:36 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:19.945 15:17:36 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:21.858 15:17:38 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:21.858 15:17:38 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:21.858 15:17:38 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:21.858 15:17:38 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:21.858 15:17:38 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:21.858 15:17:38 -- common/autotest_common.sh@1194 -- # return 0 00:07:21.858 15:17:38 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:21.858 15:17:38 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:21.858 15:17:38 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:21.858 15:17:38 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:21.858 15:17:38 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:21.858 15:17:38 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:21.858 15:17:38 -- setup/common.sh@80 -- # echo 536870912 00:07:21.858 15:17:38 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:21.858 15:17:38 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:21.858 15:17:38 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:21.858 15:17:38 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:22.118 15:17:39 -- target/filesystem.sh@69 -- # partprobe 00:07:22.378 15:17:39 -- target/filesystem.sh@70 -- # sleep 1 00:07:23.321 15:17:40 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:23.321 15:17:40 -- target/filesystem.sh@77 -- # 
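After `nvme connect`, the trace's `waitforserial` loop polls `lsblk -l -o NAME,SERIAL` until the expected number of devices carrying the serial (`SPDKISFASTANDAWESOME`) appears, bounded at 16 attempts. Below is a hypothetical stand-alone version of that loop; `list_block_serials` is a stub standing in for `lsblk` so the sketch runs without NVMe hardware, and `WAIT_INTERVAL` replaces the hard-coded `sleep 2`:

```shell
#!/usr/bin/env bash
# Poll until `expected` devices with the given serial show up, or time out.
waitforserial() {
  local serial=$1 expected=${2:-1} i=0 found
  while (( i++ <= 15 )); do
    found=$(list_block_serials | grep -c "$serial" || true)
    (( found == expected )) && return 0
    sleep "${WAIT_INTERVAL:-2}"
  done
  return 1
}

# Stub for `lsblk -l -o NAME,SERIAL` output on the test node (illustrative).
list_block_serials() {
  printf 'nvme0n1 SPDKISFASTANDAWESOME\nsda S3YJNX0M123\n'
}

WAIT_INTERVAL=0 waitforserial SPDKISFASTANDAWESOME 1 && status=ok || status=timeout
```

In the real harness the device name is then recovered with the `grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'` seen at `filesystem.sh@63`, yielding `nvme0n1` for partitioning and mkfs.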
run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:23.321 15:17:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:23.321 15:17:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.321 15:17:40 -- common/autotest_common.sh@10 -- # set +x 00:07:23.321 ************************************ 00:07:23.321 START TEST filesystem_ext4 00:07:23.321 ************************************ 00:07:23.321 15:17:40 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:23.321 15:17:40 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:23.321 15:17:40 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:23.321 15:17:40 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:23.321 15:17:40 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:23.321 15:17:40 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:23.321 15:17:40 -- common/autotest_common.sh@914 -- # local i=0 00:07:23.321 15:17:40 -- common/autotest_common.sh@915 -- # local force 00:07:23.321 15:17:40 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:23.321 15:17:40 -- common/autotest_common.sh@918 -- # force=-F 00:07:23.321 15:17:40 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:23.321 mke2fs 1.46.5 (30-Dec-2021) 00:07:23.581 Discarding device blocks: 0/522240 done 00:07:23.581 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:23.581 Filesystem UUID: c1a345ca-8580-451e-b526-40ac3f41a52f 00:07:23.581 Superblock backups stored on blocks: 00:07:23.581 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:23.581 00:07:23.581 Allocating group tables: 0/64 done 00:07:23.581 Writing inode tables: 0/64 done 00:07:26.879 Creating journal (8192 blocks): done 00:07:27.449 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:07:27.449 00:07:27.449 15:17:44 -- common/autotest_common.sh@931 -- # return 0 00:07:27.449 15:17:44 -- 
target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:27.449 15:17:44 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:27.449 15:17:44 -- target/filesystem.sh@25 -- # sync 00:07:27.449 15:17:44 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:27.449 15:17:44 -- target/filesystem.sh@27 -- # sync 00:07:27.449 15:17:44 -- target/filesystem.sh@29 -- # i=0 00:07:27.449 15:17:44 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:27.710 15:17:44 -- target/filesystem.sh@37 -- # kill -0 1452258 00:07:27.710 15:17:44 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:27.710 15:17:44 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:27.710 15:17:44 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:27.710 15:17:44 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:27.710 00:07:27.710 real 0m4.184s 00:07:27.710 user 0m0.023s 00:07:27.710 sys 0m0.075s 00:07:27.710 15:17:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:27.710 15:17:44 -- common/autotest_common.sh@10 -- # set +x 00:07:27.710 ************************************ 00:07:27.710 END TEST filesystem_ext4 00:07:27.710 ************************************ 00:07:27.710 15:17:44 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:27.710 15:17:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:27.710 15:17:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.710 15:17:44 -- common/autotest_common.sh@10 -- # set +x 00:07:27.710 ************************************ 00:07:27.710 START TEST filesystem_btrfs 00:07:27.710 ************************************ 00:07:27.710 15:17:45 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:27.710 15:17:45 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:27.710 15:17:45 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:27.710 15:17:45 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 
00:07:27.710 15:17:45 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:27.710 15:17:45 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:27.710 15:17:45 -- common/autotest_common.sh@914 -- # local i=0 00:07:27.710 15:17:45 -- common/autotest_common.sh@915 -- # local force 00:07:27.710 15:17:45 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:27.710 15:17:45 -- common/autotest_common.sh@920 -- # force=-f 00:07:27.710 15:17:45 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:28.282 btrfs-progs v6.6.2 00:07:28.282 See https://btrfs.readthedocs.io for more information. 00:07:28.282 00:07:28.282 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:28.282 NOTE: several default settings have changed in version 5.15, please make sure 00:07:28.282 this does not affect your deployments: 00:07:28.282 - DUP for metadata (-m dup) 00:07:28.282 - enabled no-holes (-O no-holes) 00:07:28.282 - enabled free-space-tree (-R free-space-tree) 00:07:28.282 00:07:28.282 Label: (null) 00:07:28.282 UUID: 6bb742ee-96c6-4cf9-b43b-f72629e52965 00:07:28.282 Node size: 16384 00:07:28.282 Sector size: 4096 00:07:28.282 Filesystem size: 510.00MiB 00:07:28.282 Block group profiles: 00:07:28.282 Data: single 8.00MiB 00:07:28.282 Metadata: DUP 32.00MiB 00:07:28.282 System: DUP 8.00MiB 00:07:28.282 SSD detected: yes 00:07:28.282 Zoned device: no 00:07:28.282 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:28.282 Runtime features: free-space-tree 00:07:28.282 Checksum: crc32c 00:07:28.282 Number of devices: 1 00:07:28.282 Devices: 00:07:28.282 ID SIZE PATH 00:07:28.282 1 510.00MiB /dev/nvme0n1p1 00:07:28.282 00:07:28.282 15:17:45 -- common/autotest_common.sh@931 -- # return 0 00:07:28.282 15:17:45 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:28.854 15:17:46 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:28.854 15:17:46 -- target/filesystem.sh@25 -- 
# sync 00:07:28.854 15:17:46 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:28.854 15:17:46 -- target/filesystem.sh@27 -- # sync 00:07:28.854 15:17:46 -- target/filesystem.sh@29 -- # i=0 00:07:28.854 15:17:46 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:28.854 15:17:46 -- target/filesystem.sh@37 -- # kill -0 1452258 00:07:28.854 15:17:46 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:28.854 15:17:46 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:28.854 15:17:46 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:28.854 15:17:46 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:28.854 00:07:28.854 real 0m1.073s 00:07:28.854 user 0m0.021s 00:07:28.854 sys 0m0.135s 00:07:28.854 15:17:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:28.854 15:17:46 -- common/autotest_common.sh@10 -- # set +x 00:07:28.854 ************************************ 00:07:28.854 END TEST filesystem_btrfs 00:07:28.854 ************************************ 00:07:28.854 15:17:46 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:28.854 15:17:46 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:28.854 15:17:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.854 15:17:46 -- common/autotest_common.sh@10 -- # set +x 00:07:29.115 ************************************ 00:07:29.115 START TEST filesystem_xfs 00:07:29.115 ************************************ 00:07:29.115 15:17:46 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:29.115 15:17:46 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:29.115 15:17:46 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:29.115 15:17:46 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:29.115 15:17:46 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:29.115 15:17:46 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:29.115 15:17:46 -- 
common/autotest_common.sh@914 -- # local i=0 00:07:29.115 15:17:46 -- common/autotest_common.sh@915 -- # local force 00:07:29.115 15:17:46 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:29.115 15:17:46 -- common/autotest_common.sh@920 -- # force=-f 00:07:29.115 15:17:46 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:29.115 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:29.115 = sectsz=512 attr=2, projid32bit=1 00:07:29.115 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:29.115 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:29.115 data = bsize=4096 blocks=130560, imaxpct=25 00:07:29.115 = sunit=0 swidth=0 blks 00:07:29.115 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:29.115 log =internal log bsize=4096 blocks=16384, version=2 00:07:29.115 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:29.115 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:30.060 Discarding blocks...Done. 00:07:30.060 15:17:47 -- common/autotest_common.sh@931 -- # return 0 00:07:30.060 15:17:47 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:31.977 15:17:49 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:31.977 15:17:49 -- target/filesystem.sh@25 -- # sync 00:07:31.977 15:17:49 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:31.977 15:17:49 -- target/filesystem.sh@27 -- # sync 00:07:31.977 15:17:49 -- target/filesystem.sh@29 -- # i=0 00:07:31.977 15:17:49 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:31.977 15:17:49 -- target/filesystem.sh@37 -- # kill -0 1452258 00:07:31.977 15:17:49 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:31.977 15:17:49 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:31.977 15:17:49 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:31.978 15:17:49 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:31.978 00:07:31.978 real 0m2.964s 00:07:31.978 user 0m0.019s 00:07:31.978 sys 0m0.085s 00:07:31.978 15:17:49 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:07:31.978 15:17:49 -- common/autotest_common.sh@10 -- # set +x 00:07:31.978 ************************************ 00:07:31.978 END TEST filesystem_xfs 00:07:31.978 ************************************ 00:07:31.978 15:17:49 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:32.239 15:17:49 -- target/filesystem.sh@93 -- # sync 00:07:32.239 15:17:49 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:32.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:32.239 15:17:49 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:32.239 15:17:49 -- common/autotest_common.sh@1205 -- # local i=0 00:07:32.239 15:17:49 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:32.239 15:17:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:32.239 15:17:49 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:32.239 15:17:49 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:32.239 15:17:49 -- common/autotest_common.sh@1217 -- # return 0 00:07:32.239 15:17:49 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:32.239 15:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.239 15:17:49 -- common/autotest_common.sh@10 -- # set +x 00:07:32.240 15:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.240 15:17:49 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:32.240 15:17:49 -- target/filesystem.sh@101 -- # killprocess 1452258 00:07:32.240 15:17:49 -- common/autotest_common.sh@936 -- # '[' -z 1452258 ']' 00:07:32.240 15:17:49 -- common/autotest_common.sh@940 -- # kill -0 1452258 00:07:32.240 15:17:49 -- common/autotest_common.sh@941 -- # uname 00:07:32.240 15:17:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:32.240 15:17:49 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1452258 00:07:32.240 15:17:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:32.240 15:17:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:32.240 15:17:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1452258' 00:07:32.240 killing process with pid 1452258 00:07:32.240 15:17:49 -- common/autotest_common.sh@955 -- # kill 1452258 00:07:32.240 15:17:49 -- common/autotest_common.sh@960 -- # wait 1452258 00:07:32.501 15:17:49 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:32.501 00:07:32.501 real 0m15.549s 00:07:32.501 user 1m1.504s 00:07:32.501 sys 0m1.396s 00:07:32.501 15:17:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:32.501 15:17:49 -- common/autotest_common.sh@10 -- # set +x 00:07:32.501 ************************************ 00:07:32.501 END TEST nvmf_filesystem_no_in_capsule 00:07:32.501 ************************************ 00:07:32.501 15:17:49 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:32.501 15:17:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:32.501 15:17:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.501 15:17:49 -- common/autotest_common.sh@10 -- # set +x 00:07:32.763 ************************************ 00:07:32.763 START TEST nvmf_filesystem_in_capsule 00:07:32.763 ************************************ 00:07:32.763 15:17:50 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:07:32.763 15:17:50 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:32.763 15:17:50 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:32.763 15:17:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:32.763 15:17:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:32.763 15:17:50 -- common/autotest_common.sh@10 -- # set +x 00:07:32.763 15:17:50 -- nvmf/common.sh@470 -- # nvmfpid=1455543 00:07:32.763 
15:17:50 -- nvmf/common.sh@471 -- # waitforlisten 1455543 00:07:32.763 15:17:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:32.763 15:17:50 -- common/autotest_common.sh@817 -- # '[' -z 1455543 ']' 00:07:32.763 15:17:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.763 15:17:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:32.763 15:17:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.763 15:17:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:32.763 15:17:50 -- common/autotest_common.sh@10 -- # set +x 00:07:32.763 [2024-04-26 15:17:50.143183] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:07:32.763 [2024-04-26 15:17:50.143238] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.763 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.024 [2024-04-26 15:17:50.215089] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.024 [2024-04-26 15:17:50.289590] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:33.024 [2024-04-26 15:17:50.289626] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:33.024 [2024-04-26 15:17:50.289635] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:33.024 [2024-04-26 15:17:50.289642] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:33.024 [2024-04-26 15:17:50.289648] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:33.024 [2024-04-26 15:17:50.289797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.024 [2024-04-26 15:17:50.290011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.024 [2024-04-26 15:17:50.290154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.024 [2024-04-26 15:17:50.289853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.595 15:17:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:33.595 15:17:50 -- common/autotest_common.sh@850 -- # return 0 00:07:33.595 15:17:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:33.595 15:17:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:33.595 15:17:50 -- common/autotest_common.sh@10 -- # set +x 00:07:33.595 15:17:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:33.595 15:17:50 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:33.595 15:17:50 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:33.595 15:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:33.595 15:17:50 -- common/autotest_common.sh@10 -- # set +x 00:07:33.595 [2024-04-26 15:17:50.966431] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.595 15:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:33.595 15:17:50 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:33.595 15:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:33.595 15:17:50 -- common/autotest_common.sh@10 -- # set +x 00:07:33.863 Malloc1 00:07:33.863 15:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:33.863 15:17:51 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:33.863 15:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:33.863 15:17:51 -- common/autotest_common.sh@10 -- # set +x 00:07:33.863 15:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:33.863 15:17:51 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:33.863 15:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:33.863 15:17:51 -- common/autotest_common.sh@10 -- # set +x 00:07:33.863 15:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:33.863 15:17:51 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:33.863 15:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:33.863 15:17:51 -- common/autotest_common.sh@10 -- # set +x 00:07:33.863 [2024-04-26 15:17:51.093868] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:33.863 15:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:33.863 15:17:51 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:33.863 15:17:51 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:33.863 15:17:51 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:33.863 15:17:51 -- common/autotest_common.sh@1366 -- # local bs 00:07:33.863 15:17:51 -- common/autotest_common.sh@1367 -- # local nb 00:07:33.863 15:17:51 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:33.863 15:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:33.863 15:17:51 -- common/autotest_common.sh@10 -- # set +x 00:07:33.863 15:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:33.863 15:17:51 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:33.863 { 00:07:33.863 "name": "Malloc1", 00:07:33.863 "aliases": [ 00:07:33.863 "53f3a0f5-3652-4921-869d-403c5cf82813" 00:07:33.863 ], 
00:07:33.863 "product_name": "Malloc disk", 00:07:33.863 "block_size": 512, 00:07:33.863 "num_blocks": 1048576, 00:07:33.863 "uuid": "53f3a0f5-3652-4921-869d-403c5cf82813", 00:07:33.863 "assigned_rate_limits": { 00:07:33.863 "rw_ios_per_sec": 0, 00:07:33.863 "rw_mbytes_per_sec": 0, 00:07:33.863 "r_mbytes_per_sec": 0, 00:07:33.863 "w_mbytes_per_sec": 0 00:07:33.863 }, 00:07:33.863 "claimed": true, 00:07:33.863 "claim_type": "exclusive_write", 00:07:33.863 "zoned": false, 00:07:33.863 "supported_io_types": { 00:07:33.863 "read": true, 00:07:33.863 "write": true, 00:07:33.863 "unmap": true, 00:07:33.863 "write_zeroes": true, 00:07:33.863 "flush": true, 00:07:33.863 "reset": true, 00:07:33.863 "compare": false, 00:07:33.863 "compare_and_write": false, 00:07:33.863 "abort": true, 00:07:33.863 "nvme_admin": false, 00:07:33.863 "nvme_io": false 00:07:33.863 }, 00:07:33.863 "memory_domains": [ 00:07:33.863 { 00:07:33.863 "dma_device_id": "system", 00:07:33.863 "dma_device_type": 1 00:07:33.863 }, 00:07:33.863 { 00:07:33.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.863 "dma_device_type": 2 00:07:33.863 } 00:07:33.863 ], 00:07:33.863 "driver_specific": {} 00:07:33.863 } 00:07:33.863 ]' 00:07:33.863 15:17:51 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:33.863 15:17:51 -- common/autotest_common.sh@1369 -- # bs=512 00:07:33.863 15:17:51 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:33.863 15:17:51 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:33.863 15:17:51 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:33.863 15:17:51 -- common/autotest_common.sh@1374 -- # echo 512 00:07:33.863 15:17:51 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:33.863 15:17:51 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:35.320 
15:17:52 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:35.320 15:17:52 -- common/autotest_common.sh@1184 -- # local i=0 00:07:35.320 15:17:52 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:35.320 15:17:52 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:35.320 15:17:52 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:37.866 15:17:54 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:37.866 15:17:54 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:37.866 15:17:54 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:37.866 15:17:54 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:37.866 15:17:54 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:37.866 15:17:54 -- common/autotest_common.sh@1194 -- # return 0 00:07:37.866 15:17:54 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:37.866 15:17:54 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:37.866 15:17:54 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:37.866 15:17:54 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:37.866 15:17:54 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:37.866 15:17:54 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:37.866 15:17:54 -- setup/common.sh@80 -- # echo 536870912 00:07:37.866 15:17:54 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:37.866 15:17:54 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:37.866 15:17:54 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:37.866 15:17:54 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:37.866 15:17:54 -- target/filesystem.sh@69 -- # partprobe 00:07:38.439 15:17:55 -- target/filesystem.sh@70 -- # sleep 1 00:07:39.382 15:17:56 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:39.382 
15:17:56 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:39.383 15:17:56 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:39.383 15:17:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.383 15:17:56 -- common/autotest_common.sh@10 -- # set +x 00:07:39.644 ************************************ 00:07:39.644 START TEST filesystem_in_capsule_ext4 00:07:39.644 ************************************ 00:07:39.644 15:17:56 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:39.644 15:17:56 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:39.644 15:17:56 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:39.644 15:17:56 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:39.644 15:17:56 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:39.644 15:17:56 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:39.644 15:17:56 -- common/autotest_common.sh@914 -- # local i=0 00:07:39.644 15:17:56 -- common/autotest_common.sh@915 -- # local force 00:07:39.644 15:17:56 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:39.644 15:17:56 -- common/autotest_common.sh@918 -- # force=-F 00:07:39.644 15:17:56 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:39.644 mke2fs 1.46.5 (30-Dec-2021) 00:07:39.644 Discarding device blocks: 0/522240 done 00:07:39.644 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:39.644 Filesystem UUID: 694e4745-da2c-4354-9ac7-eb1c5ca7141e 00:07:39.644 Superblock backups stored on blocks: 00:07:39.644 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:39.644 00:07:39.644 Allocating group tables: 0/64 done 00:07:39.644 Writing inode tables: 0/64 done 00:07:42.943 Creating journal (8192 blocks): done 00:07:42.943 Writing superblocks and filesystem accounting information: 0/64 done 00:07:42.943 00:07:42.943 15:17:59 -- 
common/autotest_common.sh@931 -- # return 0 00:07:42.943 15:17:59 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:43.205 15:18:00 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:43.205 15:18:00 -- target/filesystem.sh@25 -- # sync 00:07:43.205 15:18:00 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:43.205 15:18:00 -- target/filesystem.sh@27 -- # sync 00:07:43.205 15:18:00 -- target/filesystem.sh@29 -- # i=0 00:07:43.205 15:18:00 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:43.465 15:18:00 -- target/filesystem.sh@37 -- # kill -0 1455543 00:07:43.465 15:18:00 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:43.465 15:18:00 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:43.465 15:18:00 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:43.465 15:18:00 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:43.465 00:07:43.465 real 0m3.808s 00:07:43.465 user 0m0.021s 00:07:43.465 sys 0m0.080s 00:07:43.465 15:18:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:43.465 15:18:00 -- common/autotest_common.sh@10 -- # set +x 00:07:43.465 ************************************ 00:07:43.465 END TEST filesystem_in_capsule_ext4 00:07:43.465 ************************************ 00:07:43.465 15:18:00 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:43.465 15:18:00 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:43.465 15:18:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.465 15:18:00 -- common/autotest_common.sh@10 -- # set +x 00:07:43.465 ************************************ 00:07:43.465 START TEST filesystem_in_capsule_btrfs 00:07:43.465 ************************************ 00:07:43.465 15:18:00 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:43.465 15:18:00 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:43.465 15:18:00 -- target/filesystem.sh@19 -- # 
nvme_name=nvme0n1 00:07:43.465 15:18:00 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:43.465 15:18:00 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:43.465 15:18:00 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:43.465 15:18:00 -- common/autotest_common.sh@914 -- # local i=0 00:07:43.465 15:18:00 -- common/autotest_common.sh@915 -- # local force 00:07:43.465 15:18:00 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:43.465 15:18:00 -- common/autotest_common.sh@920 -- # force=-f 00:07:43.465 15:18:00 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:44.036 btrfs-progs v6.6.2 00:07:44.036 See https://btrfs.readthedocs.io for more information. 00:07:44.036 00:07:44.036 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:44.036 NOTE: several default settings have changed in version 5.15, please make sure 00:07:44.036 this does not affect your deployments: 00:07:44.036 - DUP for metadata (-m dup) 00:07:44.036 - enabled no-holes (-O no-holes) 00:07:44.036 - enabled free-space-tree (-R free-space-tree) 00:07:44.036 00:07:44.036 Label: (null) 00:07:44.036 UUID: a703472b-9a4b-4552-8b65-ca6a8851a677 00:07:44.036 Node size: 16384 00:07:44.036 Sector size: 4096 00:07:44.036 Filesystem size: 510.00MiB 00:07:44.036 Block group profiles: 00:07:44.036 Data: single 8.00MiB 00:07:44.036 Metadata: DUP 32.00MiB 00:07:44.036 System: DUP 8.00MiB 00:07:44.036 SSD detected: yes 00:07:44.036 Zoned device: no 00:07:44.036 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:44.036 Runtime features: free-space-tree 00:07:44.036 Checksum: crc32c 00:07:44.036 Number of devices: 1 00:07:44.036 Devices: 00:07:44.036 ID SIZE PATH 00:07:44.036 1 510.00MiB /dev/nvme0n1p1 00:07:44.036 00:07:44.036 15:18:01 -- common/autotest_common.sh@931 -- # return 0 00:07:44.036 15:18:01 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:44.297 
15:18:01 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:44.297 15:18:01 -- target/filesystem.sh@25 -- # sync 00:07:44.297 15:18:01 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:44.297 15:18:01 -- target/filesystem.sh@27 -- # sync 00:07:44.297 15:18:01 -- target/filesystem.sh@29 -- # i=0 00:07:44.297 15:18:01 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:44.297 15:18:01 -- target/filesystem.sh@37 -- # kill -0 1455543 00:07:44.297 15:18:01 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:44.297 15:18:01 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:44.297 15:18:01 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:44.297 15:18:01 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:44.558 00:07:44.558 real 0m0.885s 00:07:44.558 user 0m0.027s 00:07:44.558 sys 0m0.134s 00:07:44.558 15:18:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:44.558 15:18:01 -- common/autotest_common.sh@10 -- # set +x 00:07:44.558 ************************************ 00:07:44.558 END TEST filesystem_in_capsule_btrfs 00:07:44.558 ************************************ 00:07:44.558 15:18:01 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:44.558 15:18:01 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:44.558 15:18:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.558 15:18:01 -- common/autotest_common.sh@10 -- # set +x 00:07:44.558 ************************************ 00:07:44.558 START TEST filesystem_in_capsule_xfs 00:07:44.558 ************************************ 00:07:44.558 15:18:01 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:44.558 15:18:01 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:44.558 15:18:01 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:44.558 15:18:01 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:44.558 15:18:01 -- common/autotest_common.sh@912 
-- # local fstype=xfs 00:07:44.558 15:18:01 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:44.558 15:18:01 -- common/autotest_common.sh@914 -- # local i=0 00:07:44.558 15:18:01 -- common/autotest_common.sh@915 -- # local force 00:07:44.558 15:18:01 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:44.558 15:18:01 -- common/autotest_common.sh@920 -- # force=-f 00:07:44.558 15:18:01 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:44.558 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:44.558 = sectsz=512 attr=2, projid32bit=1 00:07:44.558 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:44.558 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:44.558 data = bsize=4096 blocks=130560, imaxpct=25 00:07:44.558 = sunit=0 swidth=0 blks 00:07:44.558 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:44.558 log =internal log bsize=4096 blocks=16384, version=2 00:07:44.558 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:44.558 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:45.499 Discarding blocks...Done. 
00:07:45.499 15:18:02 -- common/autotest_common.sh@931 -- # return 0 00:07:45.500 15:18:02 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:47.411 15:18:04 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:47.411 15:18:04 -- target/filesystem.sh@25 -- # sync 00:07:47.671 15:18:04 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:47.671 15:18:04 -- target/filesystem.sh@27 -- # sync 00:07:47.671 15:18:04 -- target/filesystem.sh@29 -- # i=0 00:07:47.671 15:18:04 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:47.671 15:18:04 -- target/filesystem.sh@37 -- # kill -0 1455543 00:07:47.671 15:18:04 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:47.671 15:18:04 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:47.671 15:18:04 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:47.671 15:18:04 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:47.671 00:07:47.671 real 0m2.978s 00:07:47.671 user 0m0.031s 00:07:47.671 sys 0m0.072s 00:07:47.671 15:18:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:47.671 15:18:04 -- common/autotest_common.sh@10 -- # set +x 00:07:47.671 ************************************ 00:07:47.671 END TEST filesystem_in_capsule_xfs 00:07:47.671 ************************************ 00:07:47.671 15:18:04 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:47.671 15:18:05 -- target/filesystem.sh@93 -- # sync 00:07:48.243 15:18:05 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:48.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.243 15:18:05 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:48.243 15:18:05 -- common/autotest_common.sh@1205 -- # local i=0 00:07:48.243 15:18:05 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:48.243 15:18:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.243 15:18:05 
-- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:48.243 15:18:05 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.243 15:18:05 -- common/autotest_common.sh@1217 -- # return 0 00:07:48.243 15:18:05 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:48.243 15:18:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:48.243 15:18:05 -- common/autotest_common.sh@10 -- # set +x 00:07:48.243 15:18:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:48.243 15:18:05 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:48.243 15:18:05 -- target/filesystem.sh@101 -- # killprocess 1455543 00:07:48.243 15:18:05 -- common/autotest_common.sh@936 -- # '[' -z 1455543 ']' 00:07:48.243 15:18:05 -- common/autotest_common.sh@940 -- # kill -0 1455543 00:07:48.243 15:18:05 -- common/autotest_common.sh@941 -- # uname 00:07:48.243 15:18:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:48.243 15:18:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1455543 00:07:48.243 15:18:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:48.243 15:18:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:48.243 15:18:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1455543' 00:07:48.243 killing process with pid 1455543 00:07:48.243 15:18:05 -- common/autotest_common.sh@955 -- # kill 1455543 00:07:48.243 15:18:05 -- common/autotest_common.sh@960 -- # wait 1455543 00:07:48.504 15:18:05 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:48.504 00:07:48.504 real 0m15.774s 00:07:48.504 user 1m2.368s 00:07:48.504 sys 0m1.415s 00:07:48.504 15:18:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:48.504 15:18:05 -- common/autotest_common.sh@10 -- # set +x 00:07:48.504 ************************************ 00:07:48.504 END TEST nvmf_filesystem_in_capsule 00:07:48.504 
************************************ 00:07:48.504 15:18:05 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:48.504 15:18:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:48.504 15:18:05 -- nvmf/common.sh@117 -- # sync 00:07:48.504 15:18:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:48.504 15:18:05 -- nvmf/common.sh@120 -- # set +e 00:07:48.504 15:18:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:48.504 15:18:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:48.504 rmmod nvme_tcp 00:07:48.504 rmmod nvme_fabrics 00:07:48.504 rmmod nvme_keyring 00:07:48.764 15:18:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:48.764 15:18:05 -- nvmf/common.sh@124 -- # set -e 00:07:48.764 15:18:05 -- nvmf/common.sh@125 -- # return 0 00:07:48.764 15:18:05 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:07:48.764 15:18:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:48.764 15:18:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:48.764 15:18:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:48.764 15:18:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:48.764 15:18:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:48.764 15:18:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.764 15:18:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.764 15:18:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.679 15:18:08 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:50.679 00:07:50.679 real 0m41.567s 00:07:50.679 user 2m6.286s 00:07:50.679 sys 0m8.514s 00:07:50.679 15:18:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:50.679 15:18:08 -- common/autotest_common.sh@10 -- # set +x 00:07:50.679 ************************************ 00:07:50.679 END TEST nvmf_filesystem 00:07:50.679 ************************************ 00:07:50.679 15:18:08 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:50.679 15:18:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:50.679 15:18:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.679 15:18:08 -- common/autotest_common.sh@10 -- # set +x 00:07:50.940 ************************************ 00:07:50.940 START TEST nvmf_discovery 00:07:50.940 ************************************ 00:07:50.940 15:18:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:50.940 * Looking for test storage... 00:07:50.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.940 15:18:08 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.940 15:18:08 -- nvmf/common.sh@7 -- # uname -s 00:07:50.940 15:18:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.940 15:18:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.940 15:18:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.940 15:18:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.940 15:18:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.940 15:18:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.940 15:18:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.940 15:18:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.940 15:18:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.940 15:18:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.940 15:18:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:50.940 15:18:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:50.940 15:18:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:50.940 15:18:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.940 15:18:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:50.940 15:18:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.940 15:18:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:50.940 15:18:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.940 15:18:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.940 15:18:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.940 15:18:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.940 15:18:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.940 15:18:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.940 15:18:08 -- paths/export.sh@5 -- # export PATH 00:07:50.940 15:18:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.940 15:18:08 -- nvmf/common.sh@47 -- # : 0 00:07:50.940 15:18:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:50.940 15:18:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:50.940 15:18:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.940 15:18:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.940 15:18:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.940 15:18:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:50.940 15:18:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:50.940 15:18:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:50.940 15:18:08 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:50.940 15:18:08 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:50.940 15:18:08 -- 
target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:50.940 15:18:08 -- target/discovery.sh@15 -- # hash nvme 00:07:50.940 15:18:08 -- target/discovery.sh@20 -- # nvmftestinit 00:07:50.940 15:18:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:50.940 15:18:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.940 15:18:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:50.940 15:18:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:50.940 15:18:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:50.940 15:18:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.940 15:18:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:50.940 15:18:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.940 15:18:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:50.940 15:18:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:50.940 15:18:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:50.940 15:18:08 -- common/autotest_common.sh@10 -- # set +x 00:07:59.079 15:18:15 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:59.079 15:18:15 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:59.079 15:18:15 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:59.079 15:18:15 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:59.079 15:18:15 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:59.079 15:18:15 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:59.079 15:18:15 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:59.079 15:18:15 -- nvmf/common.sh@295 -- # net_devs=() 00:07:59.079 15:18:15 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:59.079 15:18:15 -- nvmf/common.sh@296 -- # e810=() 00:07:59.079 15:18:15 -- nvmf/common.sh@296 -- # local -ga e810 00:07:59.079 15:18:15 -- nvmf/common.sh@297 -- # x722=() 00:07:59.079 15:18:15 -- nvmf/common.sh@297 -- # local -ga x722 00:07:59.079 15:18:15 -- nvmf/common.sh@298 -- # mlx=() 00:07:59.079 15:18:15 
-- nvmf/common.sh@298 -- # local -ga mlx 00:07:59.080 15:18:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.080 15:18:15 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.080 15:18:15 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.080 15:18:15 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.080 15:18:15 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.080 15:18:15 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.080 15:18:15 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.080 15:18:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.080 15:18:15 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.080 15:18:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.080 15:18:15 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.080 15:18:15 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:59.080 15:18:15 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:59.080 15:18:15 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:59.080 15:18:15 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:59.080 15:18:15 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:59.080 15:18:15 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:59.080 15:18:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.080 15:18:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:59.080 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:59.080 15:18:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:59.080 15:18:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:59.080 15:18:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.080 15:18:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.080 
15:18:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:59.080 15:18:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.080 15:18:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:59.080 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:59.080 15:18:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:59.080 15:18:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:59.080 15:18:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.080 15:18:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.080 15:18:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:59.080 15:18:15 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:59.080 15:18:15 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:59.080 15:18:15 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:59.080 15:18:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.080 15:18:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.080 15:18:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:59.080 15:18:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.080 15:18:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:59.080 Found net devices under 0000:31:00.0: cvl_0_0 00:07:59.080 15:18:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.080 15:18:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.080 15:18:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.080 15:18:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:59.080 15:18:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.080 15:18:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:59.080 Found net devices under 0000:31:00.1: cvl_0_1 00:07:59.080 15:18:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.080 15:18:15 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:59.080 15:18:15 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:59.080 15:18:15 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:59.080 15:18:15 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:59.080 15:18:15 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:59.080 15:18:15 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.080 15:18:15 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.080 15:18:15 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:59.080 15:18:15 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:59.080 15:18:15 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:59.080 15:18:15 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:59.080 15:18:15 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:59.080 15:18:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:59.080 15:18:15 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.080 15:18:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:59.080 15:18:15 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:59.080 15:18:15 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:59.080 15:18:15 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:59.080 15:18:15 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:59.080 15:18:15 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:59.080 15:18:15 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:59.080 15:18:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:59.080 15:18:15 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:59.080 15:18:15 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:59.080 15:18:15 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 
00:07:59.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:07:59.080 00:07:59.080 --- 10.0.0.2 ping statistics --- 00:07:59.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.080 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:07:59.080 15:18:15 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:59.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:59.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:07:59.080 00:07:59.080 --- 10.0.0.1 ping statistics --- 00:07:59.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.080 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:07:59.080 15:18:15 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.080 15:18:15 -- nvmf/common.sh@411 -- # return 0 00:07:59.080 15:18:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:59.080 15:18:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.080 15:18:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:59.080 15:18:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:59.080 15:18:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.080 15:18:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:59.080 15:18:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:59.080 15:18:15 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:59.080 15:18:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:59.080 15:18:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:59.080 15:18:15 -- common/autotest_common.sh@10 -- # set +x 00:07:59.080 15:18:15 -- nvmf/common.sh@470 -- # nvmfpid=1463781 00:07:59.080 15:18:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:59.080 15:18:15 -- nvmf/common.sh@471 -- # waitforlisten 
1463781 00:07:59.080 15:18:15 -- common/autotest_common.sh@817 -- # '[' -z 1463781 ']' 00:07:59.080 15:18:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.080 15:18:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:59.080 15:18:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.080 15:18:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:59.080 15:18:15 -- common/autotest_common.sh@10 -- # set +x 00:07:59.080 [2024-04-26 15:18:15.618354] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:07:59.080 [2024-04-26 15:18:15.618444] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.080 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.080 [2024-04-26 15:18:15.709396] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:59.080 [2024-04-26 15:18:15.778995] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.080 [2024-04-26 15:18:15.779030] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:59.080 [2024-04-26 15:18:15.779036] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.080 [2024-04-26 15:18:15.779041] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.080 [2024-04-26 15:18:15.779045] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:59.080 [2024-04-26 15:18:15.779126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.080 [2024-04-26 15:18:15.779266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.080 [2024-04-26 15:18:15.779422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.080 [2024-04-26 15:18:15.779424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:59.080 15:18:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:59.080 15:18:16 -- common/autotest_common.sh@850 -- # return 0 00:07:59.080 15:18:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:59.080 15:18:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:59.080 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.080 15:18:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.080 15:18:16 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:59.080 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.080 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.080 [2024-04-26 15:18:16.495538] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.080 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.080 15:18:16 -- target/discovery.sh@26 -- # seq 1 4 00:07:59.080 15:18:16 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:59.080 15:18:16 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:59.080 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.080 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.080 Null1 00:07:59.080 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.080 15:18:16 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:59.080 15:18:16 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:07:59.080 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.341 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.341 15:18:16 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:59.341 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.341 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.341 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.341 15:18:16 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:59.341 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.341 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.341 [2024-04-26 15:18:16.551865] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:59.341 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.341 15:18:16 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:59.341 15:18:16 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:59.341 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.341 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.341 Null2 00:07:59.341 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.341 15:18:16 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:59.341 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.341 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.341 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.341 15:18:16 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:59.341 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.341 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.341 
15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.341 15:18:16 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:59.341 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.341 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.341 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.341 15:18:16 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:59.341 15:18:16 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:59.341 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.341 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.341 Null3 00:07:59.341 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.341 15:18:16 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:59.341 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.341 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.342 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.342 15:18:16 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:59.342 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.342 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.342 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.342 15:18:16 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:59.342 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.342 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.342 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.342 15:18:16 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:59.342 15:18:16 -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:07:59.342 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.342 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.342 Null4 00:07:59.342 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.342 15:18:16 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:59.342 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.342 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.342 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.342 15:18:16 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:59.342 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.342 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.342 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.342 15:18:16 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:59.342 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.342 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.342 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.342 15:18:16 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:59.342 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.342 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.342 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.342 15:18:16 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:59.342 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.342 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.342 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.342 15:18:16 -- 
target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:07:59.603 00:07:59.603 Discovery Log Number of Records 6, Generation counter 6 00:07:59.603 =====Discovery Log Entry 0====== 00:07:59.603 trtype: tcp 00:07:59.603 adrfam: ipv4 00:07:59.603 subtype: current discovery subsystem 00:07:59.603 treq: not required 00:07:59.603 portid: 0 00:07:59.603 trsvcid: 4420 00:07:59.603 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:59.603 traddr: 10.0.0.2 00:07:59.603 eflags: explicit discovery connections, duplicate discovery information 00:07:59.603 sectype: none 00:07:59.603 =====Discovery Log Entry 1====== 00:07:59.603 trtype: tcp 00:07:59.603 adrfam: ipv4 00:07:59.603 subtype: nvme subsystem 00:07:59.603 treq: not required 00:07:59.603 portid: 0 00:07:59.603 trsvcid: 4420 00:07:59.603 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:59.603 traddr: 10.0.0.2 00:07:59.603 eflags: none 00:07:59.603 sectype: none 00:07:59.603 =====Discovery Log Entry 2====== 00:07:59.603 trtype: tcp 00:07:59.603 adrfam: ipv4 00:07:59.603 subtype: nvme subsystem 00:07:59.603 treq: not required 00:07:59.603 portid: 0 00:07:59.603 trsvcid: 4420 00:07:59.603 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:59.603 traddr: 10.0.0.2 00:07:59.603 eflags: none 00:07:59.603 sectype: none 00:07:59.603 =====Discovery Log Entry 3====== 00:07:59.603 trtype: tcp 00:07:59.603 adrfam: ipv4 00:07:59.603 subtype: nvme subsystem 00:07:59.603 treq: not required 00:07:59.603 portid: 0 00:07:59.603 trsvcid: 4420 00:07:59.603 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:59.603 traddr: 10.0.0.2 00:07:59.603 eflags: none 00:07:59.603 sectype: none 00:07:59.603 =====Discovery Log Entry 4====== 00:07:59.603 trtype: tcp 00:07:59.603 adrfam: ipv4 00:07:59.603 subtype: nvme subsystem 00:07:59.603 treq: not required 00:07:59.603 portid: 0 00:07:59.603 trsvcid: 4420 00:07:59.603 subnqn: 
nqn.2016-06.io.spdk:cnode4 00:07:59.603 traddr: 10.0.0.2 00:07:59.603 eflags: none 00:07:59.603 sectype: none 00:07:59.603 =====Discovery Log Entry 5====== 00:07:59.603 trtype: tcp 00:07:59.603 adrfam: ipv4 00:07:59.603 subtype: discovery subsystem referral 00:07:59.603 treq: not required 00:07:59.603 portid: 0 00:07:59.603 trsvcid: 4430 00:07:59.603 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:59.603 traddr: 10.0.0.2 00:07:59.603 eflags: none 00:07:59.603 sectype: none 00:07:59.603 15:18:16 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:59.603 Perform nvmf subsystem discovery via RPC 00:07:59.603 15:18:16 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:59.603 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.603 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.603 [2024-04-26 15:18:16.904872] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:59.603 [ 00:07:59.603 { 00:07:59.603 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:59.603 "subtype": "Discovery", 00:07:59.603 "listen_addresses": [ 00:07:59.603 { 00:07:59.603 "transport": "TCP", 00:07:59.603 "trtype": "TCP", 00:07:59.603 "adrfam": "IPv4", 00:07:59.603 "traddr": "10.0.0.2", 00:07:59.603 "trsvcid": "4420" 00:07:59.603 } 00:07:59.603 ], 00:07:59.603 "allow_any_host": true, 00:07:59.603 "hosts": [] 00:07:59.603 }, 00:07:59.603 { 00:07:59.603 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:59.603 "subtype": "NVMe", 00:07:59.603 "listen_addresses": [ 00:07:59.603 { 00:07:59.603 "transport": "TCP", 00:07:59.603 "trtype": "TCP", 00:07:59.603 "adrfam": "IPv4", 00:07:59.603 "traddr": "10.0.0.2", 00:07:59.603 "trsvcid": "4420" 00:07:59.603 } 00:07:59.603 ], 00:07:59.603 "allow_any_host": true, 00:07:59.603 "hosts": [], 00:07:59.603 "serial_number": "SPDK00000000000001", 00:07:59.603 "model_number": 
"SPDK bdev Controller", 00:07:59.603 "max_namespaces": 32, 00:07:59.603 "min_cntlid": 1, 00:07:59.603 "max_cntlid": 65519, 00:07:59.603 "namespaces": [ 00:07:59.603 { 00:07:59.603 "nsid": 1, 00:07:59.603 "bdev_name": "Null1", 00:07:59.603 "name": "Null1", 00:07:59.603 "nguid": "A6BE8B9B4CBC4FEB9DC7BED5B469EE2A", 00:07:59.603 "uuid": "a6be8b9b-4cbc-4feb-9dc7-bed5b469ee2a" 00:07:59.603 } 00:07:59.603 ] 00:07:59.603 }, 00:07:59.603 { 00:07:59.603 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:59.603 "subtype": "NVMe", 00:07:59.603 "listen_addresses": [ 00:07:59.603 { 00:07:59.603 "transport": "TCP", 00:07:59.604 "trtype": "TCP", 00:07:59.604 "adrfam": "IPv4", 00:07:59.604 "traddr": "10.0.0.2", 00:07:59.604 "trsvcid": "4420" 00:07:59.604 } 00:07:59.604 ], 00:07:59.604 "allow_any_host": true, 00:07:59.604 "hosts": [], 00:07:59.604 "serial_number": "SPDK00000000000002", 00:07:59.604 "model_number": "SPDK bdev Controller", 00:07:59.604 "max_namespaces": 32, 00:07:59.604 "min_cntlid": 1, 00:07:59.604 "max_cntlid": 65519, 00:07:59.604 "namespaces": [ 00:07:59.604 { 00:07:59.604 "nsid": 1, 00:07:59.604 "bdev_name": "Null2", 00:07:59.604 "name": "Null2", 00:07:59.604 "nguid": "3D68D4E4B56148758E6EBEA75F6C9848", 00:07:59.604 "uuid": "3d68d4e4-b561-4875-8e6e-bea75f6c9848" 00:07:59.604 } 00:07:59.604 ] 00:07:59.604 }, 00:07:59.604 { 00:07:59.604 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:59.604 "subtype": "NVMe", 00:07:59.604 "listen_addresses": [ 00:07:59.604 { 00:07:59.604 "transport": "TCP", 00:07:59.604 "trtype": "TCP", 00:07:59.604 "adrfam": "IPv4", 00:07:59.604 "traddr": "10.0.0.2", 00:07:59.604 "trsvcid": "4420" 00:07:59.604 } 00:07:59.604 ], 00:07:59.604 "allow_any_host": true, 00:07:59.604 "hosts": [], 00:07:59.604 "serial_number": "SPDK00000000000003", 00:07:59.604 "model_number": "SPDK bdev Controller", 00:07:59.604 "max_namespaces": 32, 00:07:59.604 "min_cntlid": 1, 00:07:59.604 "max_cntlid": 65519, 00:07:59.604 "namespaces": [ 00:07:59.604 { 00:07:59.604 "nsid": 1, 
00:07:59.604 "bdev_name": "Null3", 00:07:59.604 "name": "Null3", 00:07:59.604 "nguid": "24892469A0B54A8AADC129E118A6DA2A", 00:07:59.604 "uuid": "24892469-a0b5-4a8a-adc1-29e118a6da2a" 00:07:59.604 } 00:07:59.604 ] 00:07:59.604 }, 00:07:59.604 { 00:07:59.604 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:59.604 "subtype": "NVMe", 00:07:59.604 "listen_addresses": [ 00:07:59.604 { 00:07:59.604 "transport": "TCP", 00:07:59.604 "trtype": "TCP", 00:07:59.604 "adrfam": "IPv4", 00:07:59.604 "traddr": "10.0.0.2", 00:07:59.604 "trsvcid": "4420" 00:07:59.604 } 00:07:59.604 ], 00:07:59.604 "allow_any_host": true, 00:07:59.604 "hosts": [], 00:07:59.604 "serial_number": "SPDK00000000000004", 00:07:59.604 "model_number": "SPDK bdev Controller", 00:07:59.604 "max_namespaces": 32, 00:07:59.604 "min_cntlid": 1, 00:07:59.604 "max_cntlid": 65519, 00:07:59.604 "namespaces": [ 00:07:59.604 { 00:07:59.604 "nsid": 1, 00:07:59.604 "bdev_name": "Null4", 00:07:59.604 "name": "Null4", 00:07:59.604 "nguid": "A8933A9130BD4F9EB134CD42F4071768", 00:07:59.604 "uuid": "a8933a91-30bd-4f9e-b134-cd42f4071768" 00:07:59.604 } 00:07:59.604 ] 00:07:59.604 } 00:07:59.604 ] 00:07:59.604 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.604 15:18:16 -- target/discovery.sh@42 -- # seq 1 4 00:07:59.604 15:18:16 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:59.604 15:18:16 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:59.604 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.604 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.604 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.604 15:18:16 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:59.604 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.604 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.604 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.604 15:18:16 -- 
target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:59.604 15:18:16 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:59.604 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.604 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.604 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.604 15:18:16 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:59.604 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.604 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.604 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.604 15:18:16 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:59.604 15:18:16 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:59.604 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.604 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.604 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.604 15:18:16 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:59.604 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.604 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.604 15:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.604 15:18:16 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:59.604 15:18:16 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:59.604 15:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.604 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.604 15:18:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.604 15:18:17 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:59.604 15:18:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.604 15:18:17 -- common/autotest_common.sh@10 -- # set +x 00:07:59.604 
15:18:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.604 15:18:17 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:59.604 15:18:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.604 15:18:17 -- common/autotest_common.sh@10 -- # set +x 00:07:59.604 15:18:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.604 15:18:17 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:59.604 15:18:17 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:59.604 15:18:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.604 15:18:17 -- common/autotest_common.sh@10 -- # set +x 00:07:59.604 15:18:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.865 15:18:17 -- target/discovery.sh@49 -- # check_bdevs= 00:07:59.865 15:18:17 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:59.865 15:18:17 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:59.865 15:18:17 -- target/discovery.sh@57 -- # nvmftestfini 00:07:59.865 15:18:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:59.865 15:18:17 -- nvmf/common.sh@117 -- # sync 00:07:59.865 15:18:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:59.865 15:18:17 -- nvmf/common.sh@120 -- # set +e 00:07:59.865 15:18:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:59.865 15:18:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:59.865 rmmod nvme_tcp 00:07:59.865 rmmod nvme_fabrics 00:07:59.865 rmmod nvme_keyring 00:07:59.865 15:18:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:59.865 15:18:17 -- nvmf/common.sh@124 -- # set -e 00:07:59.865 15:18:17 -- nvmf/common.sh@125 -- # return 0 00:07:59.865 15:18:17 -- nvmf/common.sh@478 -- # '[' -n 1463781 ']' 00:07:59.865 15:18:17 -- nvmf/common.sh@479 -- # killprocess 1463781 00:07:59.865 15:18:17 -- common/autotest_common.sh@936 -- # '[' -z 1463781 ']' 00:07:59.865 15:18:17 -- common/autotest_common.sh@940 -- # kill -0 1463781 00:07:59.865 
15:18:17 -- common/autotest_common.sh@941 -- # uname 00:07:59.865 15:18:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:59.865 15:18:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1463781 00:07:59.865 15:18:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:59.865 15:18:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:59.865 15:18:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1463781' 00:07:59.865 killing process with pid 1463781 00:07:59.865 15:18:17 -- common/autotest_common.sh@955 -- # kill 1463781 00:07:59.865 [2024-04-26 15:18:17.187512] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:59.865 15:18:17 -- common/autotest_common.sh@960 -- # wait 1463781 00:08:00.126 15:18:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:00.126 15:18:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:00.126 15:18:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:00.126 15:18:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:00.126 15:18:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:00.127 15:18:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.127 15:18:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.127 15:18:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.040 15:18:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:02.040 00:08:02.040 real 0m11.153s 00:08:02.040 user 0m8.439s 00:08:02.040 sys 0m5.661s 00:08:02.040 15:18:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:02.040 15:18:19 -- common/autotest_common.sh@10 -- # set +x 00:08:02.040 ************************************ 00:08:02.040 END TEST nvmf_discovery 00:08:02.040 ************************************ 00:08:02.040 15:18:19 -- 
nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:02.040 15:18:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:02.040 15:18:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.040 15:18:19 -- common/autotest_common.sh@10 -- # set +x 00:08:02.302 ************************************ 00:08:02.302 START TEST nvmf_referrals 00:08:02.302 ************************************ 00:08:02.302 15:18:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:02.302 * Looking for test storage... 00:08:02.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.302 15:18:19 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.302 15:18:19 -- nvmf/common.sh@7 -- # uname -s 00:08:02.302 15:18:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.302 15:18:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.302 15:18:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.302 15:18:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.302 15:18:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.302 15:18:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.302 15:18:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.302 15:18:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.302 15:18:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.302 15:18:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.302 15:18:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:02.302 15:18:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:02.302 15:18:19 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.302 15:18:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.302 15:18:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.302 15:18:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.302 15:18:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.302 15:18:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.302 15:18:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.302 15:18:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.302 15:18:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.302 15:18:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.302 15:18:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.302 15:18:19 -- paths/export.sh@5 -- # export PATH 00:08:02.302 15:18:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.302 15:18:19 -- nvmf/common.sh@47 -- # : 0 00:08:02.303 15:18:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:02.303 15:18:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:02.303 15:18:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.303 15:18:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.303 15:18:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.303 15:18:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:02.303 15:18:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:02.303 15:18:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:02.303 15:18:19 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:02.303 15:18:19 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:02.303 15:18:19 -- 
target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:02.303 15:18:19 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:02.303 15:18:19 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:02.303 15:18:19 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:02.303 15:18:19 -- target/referrals.sh@37 -- # nvmftestinit 00:08:02.303 15:18:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:02.303 15:18:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.303 15:18:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:02.303 15:18:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:02.303 15:18:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:02.303 15:18:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.303 15:18:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.303 15:18:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.303 15:18:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:02.303 15:18:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:02.303 15:18:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:02.303 15:18:19 -- common/autotest_common.sh@10 -- # set +x 00:08:10.447 15:18:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:10.447 15:18:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:10.448 15:18:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:10.448 15:18:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:10.448 15:18:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:10.448 15:18:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:10.448 15:18:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:10.448 15:18:26 -- nvmf/common.sh@295 -- # net_devs=() 00:08:10.448 15:18:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:10.448 15:18:26 -- nvmf/common.sh@296 -- # e810=() 00:08:10.448 15:18:26 -- nvmf/common.sh@296 -- # local 
-ga e810 00:08:10.448 15:18:26 -- nvmf/common.sh@297 -- # x722=() 00:08:10.448 15:18:26 -- nvmf/common.sh@297 -- # local -ga x722 00:08:10.448 15:18:26 -- nvmf/common.sh@298 -- # mlx=() 00:08:10.448 15:18:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:10.448 15:18:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:10.448 15:18:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:10.448 15:18:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:10.448 15:18:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:10.448 15:18:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:10.448 15:18:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:10.448 15:18:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:10.448 15:18:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:10.448 15:18:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:10.448 15:18:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:10.448 15:18:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:10.448 15:18:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:10.448 15:18:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:10.448 15:18:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:10.448 15:18:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:10.448 15:18:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:10.448 15:18:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:10.448 15:18:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.448 15:18:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:10.448 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:10.448 15:18:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.448 15:18:26 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.448 15:18:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.448 15:18:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.448 15:18:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.448 15:18:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.448 15:18:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:10.448 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:10.448 15:18:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.448 15:18:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.448 15:18:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.448 15:18:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.448 15:18:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.448 15:18:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:10.448 15:18:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:10.448 15:18:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:10.448 15:18:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.448 15:18:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.448 15:18:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:10.448 15:18:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.448 15:18:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:10.448 Found net devices under 0000:31:00.0: cvl_0_0 00:08:10.448 15:18:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.448 15:18:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.448 15:18:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.448 15:18:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:10.448 15:18:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.448 15:18:26 -- nvmf/common.sh@389 -- # echo 
'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:10.448 Found net devices under 0000:31:00.1: cvl_0_1 00:08:10.448 15:18:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.448 15:18:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:10.448 15:18:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:10.448 15:18:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:10.448 15:18:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:10.448 15:18:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:10.448 15:18:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.448 15:18:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.448 15:18:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:10.448 15:18:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:10.448 15:18:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:10.448 15:18:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:10.448 15:18:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:10.448 15:18:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:10.448 15:18:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.448 15:18:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:10.448 15:18:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:10.448 15:18:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:10.448 15:18:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:10.448 15:18:26 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:10.448 15:18:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:10.448 15:18:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:10.448 15:18:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:10.448 15:18:27 -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:10.448 15:18:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:10.448 15:18:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:10.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:10.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:08:10.448 00:08:10.448 --- 10.0.0.2 ping statistics --- 00:08:10.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.448 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:08:10.448 15:18:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:10.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:10.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:08:10.448 00:08:10.448 --- 10.0.0.1 ping statistics --- 00:08:10.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.448 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:08:10.448 15:18:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.448 15:18:27 -- nvmf/common.sh@411 -- # return 0 00:08:10.448 15:18:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:10.448 15:18:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.448 15:18:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:10.448 15:18:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:10.448 15:18:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.448 15:18:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:10.448 15:18:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:10.448 15:18:27 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:10.448 15:18:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:10.448 15:18:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:10.448 15:18:27 -- common/autotest_common.sh@10 -- # set +x 00:08:10.448 15:18:27 -- nvmf/common.sh@470 -- # nvmfpid=1468544 00:08:10.448 15:18:27 
-- nvmf/common.sh@471 -- # waitforlisten 1468544 00:08:10.448 15:18:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:10.448 15:18:27 -- common/autotest_common.sh@817 -- # '[' -z 1468544 ']' 00:08:10.448 15:18:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.448 15:18:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:10.448 15:18:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.448 15:18:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:10.448 15:18:27 -- common/autotest_common.sh@10 -- # set +x 00:08:10.448 [2024-04-26 15:18:27.167923] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:08:10.448 [2024-04-26 15:18:27.167990] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.448 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.448 [2024-04-26 15:18:27.240585] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.448 [2024-04-26 15:18:27.314174] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.448 [2024-04-26 15:18:27.314212] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.448 [2024-04-26 15:18:27.314222] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.448 [2024-04-26 15:18:27.314229] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:10.448 [2024-04-26 15:18:27.314236] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.448 [2024-04-26 15:18:27.314407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.448 [2024-04-26 15:18:27.314524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.448 [2024-04-26 15:18:27.314682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.448 [2024-04-26 15:18:27.314682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.709 15:18:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:10.709 15:18:27 -- common/autotest_common.sh@850 -- # return 0 00:08:10.709 15:18:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:10.709 15:18:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:10.709 15:18:27 -- common/autotest_common.sh@10 -- # set +x 00:08:10.709 15:18:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.709 15:18:27 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:10.709 15:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.710 15:18:27 -- common/autotest_common.sh@10 -- # set +x 00:08:10.710 [2024-04-26 15:18:27.988404] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.710 15:18:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.710 15:18:27 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:10.710 15:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.710 15:18:27 -- common/autotest_common.sh@10 -- # set +x 00:08:10.710 [2024-04-26 15:18:28.004591] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:10.710 15:18:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.710 15:18:28 -- target/referrals.sh@44 -- 
# rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:10.710 15:18:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.710 15:18:28 -- common/autotest_common.sh@10 -- # set +x 00:08:10.710 15:18:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.710 15:18:28 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:10.710 15:18:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.710 15:18:28 -- common/autotest_common.sh@10 -- # set +x 00:08:10.710 15:18:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.710 15:18:28 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:10.710 15:18:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.710 15:18:28 -- common/autotest_common.sh@10 -- # set +x 00:08:10.710 15:18:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.710 15:18:28 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:10.710 15:18:28 -- target/referrals.sh@48 -- # jq length 00:08:10.710 15:18:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.710 15:18:28 -- common/autotest_common.sh@10 -- # set +x 00:08:10.710 15:18:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.710 15:18:28 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:10.710 15:18:28 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:10.710 15:18:28 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:10.710 15:18:28 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:10.710 15:18:28 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:10.710 15:18:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.710 15:18:28 -- target/referrals.sh@21 -- # sort 00:08:10.710 15:18:28 -- common/autotest_common.sh@10 -- # set +x 00:08:10.710 15:18:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.710 15:18:28 -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:10.710 15:18:28 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:10.710 15:18:28 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:10.710 15:18:28 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:10.710 15:18:28 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:10.710 15:18:28 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.710 15:18:28 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:10.710 15:18:28 -- target/referrals.sh@26 -- # sort 00:08:10.969 15:18:28 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:10.969 15:18:28 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:10.969 15:18:28 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:10.969 15:18:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.969 15:18:28 -- common/autotest_common.sh@10 -- # set +x 00:08:10.969 15:18:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.969 15:18:28 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:10.969 15:18:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.969 15:18:28 -- common/autotest_common.sh@10 -- # set +x 00:08:10.969 15:18:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.969 15:18:28 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:10.969 15:18:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.969 15:18:28 -- 
common/autotest_common.sh@10 -- # set +x 00:08:10.969 15:18:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.969 15:18:28 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:10.969 15:18:28 -- target/referrals.sh@56 -- # jq length 00:08:10.969 15:18:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.969 15:18:28 -- common/autotest_common.sh@10 -- # set +x 00:08:10.969 15:18:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.969 15:18:28 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:10.969 15:18:28 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:10.969 15:18:28 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:10.970 15:18:28 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:10.970 15:18:28 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.970 15:18:28 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:10.970 15:18:28 -- target/referrals.sh@26 -- # sort 00:08:11.230 15:18:28 -- target/referrals.sh@26 -- # echo 00:08:11.230 15:18:28 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:11.230 15:18:28 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:11.230 15:18:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:11.230 15:18:28 -- common/autotest_common.sh@10 -- # set +x 00:08:11.230 15:18:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:11.230 15:18:28 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:11.230 15:18:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:11.230 15:18:28 -- common/autotest_common.sh@10 -- # set +x 00:08:11.230 15:18:28 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:11.230 15:18:28 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:11.230 15:18:28 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:11.230 15:18:28 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:11.230 15:18:28 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:11.230 15:18:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:11.230 15:18:28 -- target/referrals.sh@21 -- # sort 00:08:11.230 15:18:28 -- common/autotest_common.sh@10 -- # set +x 00:08:11.230 15:18:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:11.230 15:18:28 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:11.230 15:18:28 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:11.230 15:18:28 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:11.230 15:18:28 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:11.230 15:18:28 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:11.230 15:18:28 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.230 15:18:28 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:11.230 15:18:28 -- target/referrals.sh@26 -- # sort 00:08:11.230 15:18:28 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:11.230 15:18:28 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:11.230 15:18:28 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:11.230 15:18:28 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:11.230 15:18:28 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:11.490 15:18:28 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.490 15:18:28 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:11.490 15:18:28 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:11.490 15:18:28 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:11.490 15:18:28 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:11.490 15:18:28 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:11.491 15:18:28 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.491 15:18:28 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:11.751 15:18:29 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:11.751 15:18:29 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:11.751 15:18:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:11.751 15:18:29 -- common/autotest_common.sh@10 -- # set +x 00:08:11.751 15:18:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:11.751 15:18:29 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:11.751 15:18:29 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:11.751 15:18:29 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:11.751 15:18:29 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:11.751 15:18:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:11.751 15:18:29 -- 
target/referrals.sh@21 -- # sort 00:08:11.751 15:18:29 -- common/autotest_common.sh@10 -- # set +x 00:08:11.751 15:18:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:11.751 15:18:29 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:11.751 15:18:29 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:11.751 15:18:29 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:11.751 15:18:29 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:11.751 15:18:29 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:11.751 15:18:29 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.751 15:18:29 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:11.751 15:18:29 -- target/referrals.sh@26 -- # sort 00:08:11.751 15:18:29 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:11.751 15:18:29 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:11.751 15:18:29 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:11.751 15:18:29 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:11.751 15:18:29 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:12.011 15:18:29 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.011 15:18:29 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:12.011 15:18:29 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:12.011 15:18:29 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:12.011 15:18:29 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:12.011 15:18:29 -- target/referrals.sh@31 
-- # local 'subtype=discovery subsystem referral' 00:08:12.011 15:18:29 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.011 15:18:29 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:12.271 15:18:29 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:12.271 15:18:29 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:12.271 15:18:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:12.271 15:18:29 -- common/autotest_common.sh@10 -- # set +x 00:08:12.271 15:18:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:12.271 15:18:29 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:12.271 15:18:29 -- target/referrals.sh@82 -- # jq length 00:08:12.271 15:18:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:12.271 15:18:29 -- common/autotest_common.sh@10 -- # set +x 00:08:12.271 15:18:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:12.271 15:18:29 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:12.271 15:18:29 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:12.271 15:18:29 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:12.271 15:18:29 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:12.271 15:18:29 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.271 15:18:29 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:12.271 15:18:29 -- 
target/referrals.sh@26 -- # sort 00:08:12.271 15:18:29 -- target/referrals.sh@26 -- # echo 00:08:12.271 15:18:29 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:12.271 15:18:29 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:12.271 15:18:29 -- target/referrals.sh@86 -- # nvmftestfini 00:08:12.271 15:18:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:12.271 15:18:29 -- nvmf/common.sh@117 -- # sync 00:08:12.531 15:18:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:12.531 15:18:29 -- nvmf/common.sh@120 -- # set +e 00:08:12.531 15:18:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:12.531 15:18:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:12.531 rmmod nvme_tcp 00:08:12.531 rmmod nvme_fabrics 00:08:12.531 rmmod nvme_keyring 00:08:12.531 15:18:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:12.531 15:18:29 -- nvmf/common.sh@124 -- # set -e 00:08:12.531 15:18:29 -- nvmf/common.sh@125 -- # return 0 00:08:12.531 15:18:29 -- nvmf/common.sh@478 -- # '[' -n 1468544 ']' 00:08:12.531 15:18:29 -- nvmf/common.sh@479 -- # killprocess 1468544 00:08:12.531 15:18:29 -- common/autotest_common.sh@936 -- # '[' -z 1468544 ']' 00:08:12.531 15:18:29 -- common/autotest_common.sh@940 -- # kill -0 1468544 00:08:12.531 15:18:29 -- common/autotest_common.sh@941 -- # uname 00:08:12.531 15:18:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:12.531 15:18:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1468544 00:08:12.531 15:18:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:12.531 15:18:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:12.531 15:18:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1468544' 00:08:12.531 killing process with pid 1468544 00:08:12.531 15:18:29 -- common/autotest_common.sh@955 -- # kill 1468544 00:08:12.531 15:18:29 -- common/autotest_common.sh@960 -- # wait 1468544 00:08:12.531 15:18:29 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:12.531 15:18:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:12.531 15:18:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:12.531 15:18:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:12.531 15:18:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:12.531 15:18:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.531 15:18:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.531 15:18:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.071 15:18:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:15.071 00:08:15.071 real 0m12.451s 00:08:15.071 user 0m13.645s 00:08:15.071 sys 0m6.154s 00:08:15.071 15:18:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:15.071 15:18:32 -- common/autotest_common.sh@10 -- # set +x 00:08:15.071 ************************************ 00:08:15.071 END TEST nvmf_referrals 00:08:15.071 ************************************ 00:08:15.071 15:18:32 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:15.071 15:18:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:15.071 15:18:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:15.071 15:18:32 -- common/autotest_common.sh@10 -- # set +x 00:08:15.071 ************************************ 00:08:15.071 START TEST nvmf_connect_disconnect 00:08:15.071 ************************************ 00:08:15.071 15:18:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:15.071 * Looking for test storage... 
00:08:15.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.071 15:18:32 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.071 15:18:32 -- nvmf/common.sh@7 -- # uname -s 00:08:15.071 15:18:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.071 15:18:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.071 15:18:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.071 15:18:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.071 15:18:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.071 15:18:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.071 15:18:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.071 15:18:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.071 15:18:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.071 15:18:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.071 15:18:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:15.071 15:18:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:15.071 15:18:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.071 15:18:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.071 15:18:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.071 15:18:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.071 15:18:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.071 15:18:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.071 15:18:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.071 15:18:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.071 15:18:32 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.071 15:18:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.072 15:18:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.072 15:18:32 -- paths/export.sh@5 -- # export PATH 00:08:15.072 15:18:32 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.072 15:18:32 -- nvmf/common.sh@47 -- # : 0 00:08:15.072 15:18:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:15.072 15:18:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:15.072 15:18:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.072 15:18:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.072 15:18:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.072 15:18:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:15.072 15:18:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:15.072 15:18:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:15.072 15:18:32 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:15.072 15:18:32 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:15.072 15:18:32 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:15.072 15:18:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:15.072 15:18:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.072 15:18:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:15.072 15:18:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:15.072 15:18:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:15.072 15:18:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.072 15:18:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.072 15:18:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
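The nvmf_referrals checks earlier in this log compare RPC-reported referrals against what `nvme discover -o json` returns, using a jq filter that drops the "current discovery subsystem" record before sorting the remaining traddr values. A minimal, self-contained sketch of that parsing step — the sample JSON is invented for illustration and `jq` must be installed; the real input comes from `nvme discover` against a live target:

```shell
#!/usr/bin/env bash
# Extract referral target addresses from `nvme discover -o json` output,
# dropping the entry for the discovery subsystem we are already talking to.
# This mirrors the jq filter used by get_referral_ips in target/referrals.sh.
get_referral_traddrs() {
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
}

# Hypothetical sample of the JSON shape `nvme discover ... -o json` emits.
sample='{"records":[
  {"subtype":"current discovery subsystem","traddr":"10.0.0.2"},
  {"subtype":"nvme subsystem","traddr":"127.0.0.2"},
  {"subtype":"discovery subsystem referral","traddr":"127.0.0.3"}
]}'

printf '%s' "$sample" | get_referral_traddrs
```

The test script then compares this sorted list, joined with spaces, against the addresses it registered via `rpc_cmd nvmf_discovery_add_referral`.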
00:08:15.072 15:18:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:15.072 15:18:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:15.072 15:18:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:15.072 15:18:32 -- common/autotest_common.sh@10 -- # set +x 00:08:23.330 15:18:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:23.330 15:18:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:23.330 15:18:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:23.330 15:18:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:23.330 15:18:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:23.330 15:18:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:23.330 15:18:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:23.330 15:18:39 -- nvmf/common.sh@295 -- # net_devs=() 00:08:23.330 15:18:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:23.330 15:18:39 -- nvmf/common.sh@296 -- # e810=() 00:08:23.330 15:18:39 -- nvmf/common.sh@296 -- # local -ga e810 00:08:23.330 15:18:39 -- nvmf/common.sh@297 -- # x722=() 00:08:23.330 15:18:39 -- nvmf/common.sh@297 -- # local -ga x722 00:08:23.330 15:18:39 -- nvmf/common.sh@298 -- # mlx=() 00:08:23.330 15:18:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:23.330 15:18:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.330 15:18:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.330 15:18:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.330 15:18:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.330 15:18:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.330 15:18:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.330 15:18:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.330 15:18:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:08:23.330 15:18:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.330 15:18:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.330 15:18:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.330 15:18:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:23.330 15:18:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:23.330 15:18:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:23.330 15:18:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:23.330 15:18:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:23.330 15:18:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:23.331 15:18:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.331 15:18:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:23.331 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:23.331 15:18:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.331 15:18:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.331 15:18:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.331 15:18:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.331 15:18:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.331 15:18:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.331 15:18:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:23.331 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:23.331 15:18:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.331 15:18:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.331 15:18:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.331 15:18:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.331 15:18:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.331 15:18:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:23.331 15:18:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:23.331 
15:18:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:23.331 15:18:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.331 15:18:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.331 15:18:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:23.331 15:18:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.331 15:18:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:23.331 Found net devices under 0000:31:00.0: cvl_0_0 00:08:23.331 15:18:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.331 15:18:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.331 15:18:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.331 15:18:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:23.331 15:18:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.331 15:18:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:23.331 Found net devices under 0000:31:00.1: cvl_0_1 00:08:23.331 15:18:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.331 15:18:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:23.331 15:18:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:23.331 15:18:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:23.331 15:18:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:23.331 15:18:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:23.331 15:18:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.331 15:18:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.331 15:18:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.331 15:18:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:23.331 15:18:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.331 15:18:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.331 15:18:39 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:23.331 15:18:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.331 15:18:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.331 15:18:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:23.331 15:18:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:23.331 15:18:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.331 15:18:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.331 15:18:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.331 15:18:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.331 15:18:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:23.331 15:18:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.331 15:18:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.331 15:18:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.331 15:18:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:23.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:08:23.331 00:08:23.331 --- 10.0.0.2 ping statistics --- 00:08:23.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.331 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:08:23.331 15:18:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
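The nvmf_tcp_init sequence just logged moves one port of the NIC pair into a private network namespace, addresses both ends, brings the links up, and verifies connectivity with the pings above. A hedged sketch of the same plumbing that only prints the commands instead of running them (device and namespace names here are placeholders, not the log's cvl_0_0/cvl_0_1, and actually executing the output requires root):

```shell
#!/usr/bin/env bash
# Emit the namespace-plumbing steps nvmf_tcp_init performs, parameterized.
# $1 = namespace name, $2 = host-side interface (gets 10.0.0.1),
# $3 = interface moved into the namespace (gets 10.0.0.2).
netns_cmds() {
    local ns=$1 host_if=$2 ns_if=$3
    cat <<EOF
ip netns add $ns
ip link set $ns_if netns $ns
ip addr add 10.0.0.1/24 dev $host_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $ns_if
ip link set $host_if up
ip netns exec $ns ip link set $ns_if up
ip netns exec $ns ip link set lo up
EOF
}

# Print the sequence; pipe to `sudo sh` only on a box where this is safe.
netns_cmds nvmf_ns veth0 veth1
```

Keeping the target's interface in its own namespace is what lets the initiator and target share one host while still exercising a real TCP path.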
00:08:23.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:08:23.331 00:08:23.331 --- 10.0.0.1 ping statistics --- 00:08:23.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.331 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:08:23.331 15:18:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.331 15:18:39 -- nvmf/common.sh@411 -- # return 0 00:08:23.331 15:18:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:23.331 15:18:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.331 15:18:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:23.331 15:18:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:23.331 15:18:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.331 15:18:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:23.331 15:18:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:23.331 15:18:39 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:23.331 15:18:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:23.331 15:18:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:23.331 15:18:39 -- common/autotest_common.sh@10 -- # set +x 00:08:23.331 15:18:39 -- nvmf/common.sh@470 -- # nvmfpid=1473386 00:08:23.331 15:18:39 -- nvmf/common.sh@471 -- # waitforlisten 1473386 00:08:23.331 15:18:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:23.331 15:18:39 -- common/autotest_common.sh@817 -- # '[' -z 1473386 ']' 00:08:23.331 15:18:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.331 15:18:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:23.331 15:18:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:23.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.331 15:18:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:23.331 15:18:39 -- common/autotest_common.sh@10 -- # set +x 00:08:23.331 [2024-04-26 15:18:39.660932] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:08:23.331 [2024-04-26 15:18:39.660988] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.331 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.331 [2024-04-26 15:18:39.730703] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.331 [2024-04-26 15:18:39.801649] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.331 [2024-04-26 15:18:39.801690] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.331 [2024-04-26 15:18:39.801700] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.331 [2024-04-26 15:18:39.801707] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.331 [2024-04-26 15:18:39.801714] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
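The `waitforlisten 1473386` step above blocks until the freshly launched nvmf_tgt opens its RPC socket. A rough reimplementation of that polling loop, assuming the default /var/tmp/spdk.sock path — the real helper also verifies the PID is alive and probes the RPC itself, which this sketch omits:

```shell
#!/usr/bin/env bash
# Poll for a UNIX-domain socket the way waitforlisten waits for
# /var/tmp/spdk.sock after launching nvmf_tgt. The retry budget and
# 100 ms interval are illustrative, not the script's exact values.
wait_for_socket() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        if [ -S "$sock" ]; then
            return 0    # socket exists: target is listening
        fi
        sleep 0.1
    done
    return 1            # timed out
}

# Example: give a target roughly two seconds to come up (hypothetical path).
if wait_for_socket /var/tmp/spdk.sock 20; then
    echo "RPC socket is ready"
else
    echo "target did not come up in time"
fi
```

Polling for the socket rather than sleeping a fixed interval is what keeps these tests fast on quick machines while still tolerating slow CI nodes.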
00:08:23.331 [2024-04-26 15:18:39.801887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.331 [2024-04-26 15:18:39.802085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.331 [2024-04-26 15:18:39.802086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.331 [2024-04-26 15:18:39.801947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.331 15:18:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:23.331 15:18:40 -- common/autotest_common.sh@850 -- # return 0 00:08:23.331 15:18:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:23.331 15:18:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:23.331 15:18:40 -- common/autotest_common.sh@10 -- # set +x 00:08:23.331 15:18:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.331 15:18:40 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:23.331 15:18:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.331 15:18:40 -- common/autotest_common.sh@10 -- # set +x 00:08:23.331 [2024-04-26 15:18:40.483870] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.331 15:18:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.331 15:18:40 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:23.331 15:18:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.331 15:18:40 -- common/autotest_common.sh@10 -- # set +x 00:08:23.331 15:18:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.331 15:18:40 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:23.331 15:18:40 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:23.331 15:18:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.331 15:18:40 -- 
common/autotest_common.sh@10 -- # set +x 00:08:23.331 15:18:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.331 15:18:40 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:23.331 15:18:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.331 15:18:40 -- common/autotest_common.sh@10 -- # set +x 00:08:23.331 15:18:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.331 15:18:40 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:23.331 15:18:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.331 15:18:40 -- common/autotest_common.sh@10 -- # set +x 00:08:23.331 [2024-04-26 15:18:40.543199] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.331 15:18:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.331 15:18:40 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:23.331 15:18:40 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:23.331 15:18:40 -- target/connect_disconnect.sh@34 -- # set +x 00:08:27.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.693 15:18:58 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:41.693 15:18:58 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:41.693 15:18:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:41.693 15:18:58 -- nvmf/common.sh@117 -- # sync 00:08:41.693 15:18:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:41.693 15:18:58 -- nvmf/common.sh@120 -- # set +e 00:08:41.693 15:18:58 -- nvmf/common.sh@121 -- # 
for i in {1..20} 00:08:41.693 15:18:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:41.693 rmmod nvme_tcp 00:08:41.693 rmmod nvme_fabrics 00:08:41.693 rmmod nvme_keyring 00:08:41.693 15:18:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:41.693 15:18:58 -- nvmf/common.sh@124 -- # set -e 00:08:41.693 15:18:58 -- nvmf/common.sh@125 -- # return 0 00:08:41.693 15:18:58 -- nvmf/common.sh@478 -- # '[' -n 1473386 ']' 00:08:41.693 15:18:58 -- nvmf/common.sh@479 -- # killprocess 1473386 00:08:41.693 15:18:58 -- common/autotest_common.sh@936 -- # '[' -z 1473386 ']' 00:08:41.693 15:18:58 -- common/autotest_common.sh@940 -- # kill -0 1473386 00:08:41.693 15:18:58 -- common/autotest_common.sh@941 -- # uname 00:08:41.693 15:18:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:41.693 15:18:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1473386 00:08:41.693 15:18:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:41.693 15:18:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:41.693 15:18:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1473386' 00:08:41.693 killing process with pid 1473386 00:08:41.693 15:18:58 -- common/autotest_common.sh@955 -- # kill 1473386 00:08:41.693 15:18:58 -- common/autotest_common.sh@960 -- # wait 1473386 00:08:41.693 15:18:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:41.693 15:18:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:41.693 15:18:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:41.693 15:18:58 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:41.693 15:18:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:41.693 15:18:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.693 15:18:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.693 15:18:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.607 
15:19:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:43.868 00:08:43.868 real 0m28.819s 00:08:43.868 user 1m18.467s 00:08:43.868 sys 0m6.601s 00:08:43.868 15:19:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:43.868 15:19:01 -- common/autotest_common.sh@10 -- # set +x 00:08:43.868 ************************************ 00:08:43.868 END TEST nvmf_connect_disconnect 00:08:43.868 ************************************ 00:08:43.868 15:19:01 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:43.868 15:19:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:43.868 15:19:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.868 15:19:01 -- common/autotest_common.sh@10 -- # set +x 00:08:43.868 ************************************ 00:08:43.868 START TEST nvmf_multitarget 00:08:43.868 ************************************ 00:08:43.868 15:19:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:44.136 * Looking for test storage... 
00:08:44.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.136 15:19:01 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.136 15:19:01 -- nvmf/common.sh@7 -- # uname -s 00:08:44.136 15:19:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.136 15:19:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.136 15:19:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.136 15:19:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.136 15:19:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.136 15:19:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.136 15:19:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.136 15:19:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.136 15:19:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.136 15:19:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.136 15:19:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:44.136 15:19:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:44.136 15:19:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.136 15:19:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.136 15:19:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.136 15:19:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.136 15:19:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.136 15:19:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.136 15:19:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.136 15:19:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.136 15:19:01 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.136 15:19:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.136 15:19:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.136 15:19:01 -- paths/export.sh@5 -- # export PATH 00:08:44.136 15:19:01 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.136 15:19:01 -- nvmf/common.sh@47 -- # : 0 00:08:44.136 15:19:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:44.136 15:19:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:44.136 15:19:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.136 15:19:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.136 15:19:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.136 15:19:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:44.136 15:19:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:44.136 15:19:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:44.136 15:19:01 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:44.136 15:19:01 -- target/multitarget.sh@15 -- # nvmftestinit 00:08:44.136 15:19:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:44.136 15:19:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.136 15:19:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:44.136 15:19:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:44.136 15:19:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:44.136 15:19:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.136 15:19:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.136 15:19:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.136 15:19:01 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:44.136 15:19:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:44.136 15:19:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:44.136 15:19:01 -- common/autotest_common.sh@10 -- # set +x 00:08:52.287 15:19:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:52.287 15:19:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:52.287 15:19:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:52.287 15:19:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:52.287 15:19:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:52.287 15:19:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:52.287 15:19:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:52.287 15:19:08 -- nvmf/common.sh@295 -- # net_devs=() 00:08:52.287 15:19:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:52.287 15:19:08 -- nvmf/common.sh@296 -- # e810=() 00:08:52.287 15:19:08 -- nvmf/common.sh@296 -- # local -ga e810 00:08:52.287 15:19:08 -- nvmf/common.sh@297 -- # x722=() 00:08:52.287 15:19:08 -- nvmf/common.sh@297 -- # local -ga x722 00:08:52.287 15:19:08 -- nvmf/common.sh@298 -- # mlx=() 00:08:52.287 15:19:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:52.287 15:19:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.287 15:19:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.287 15:19:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.287 15:19:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.287 15:19:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.287 15:19:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.287 15:19:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.287 15:19:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.287 15:19:08 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.287 15:19:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.287 15:19:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.287 15:19:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:52.287 15:19:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:52.287 15:19:08 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:52.287 15:19:08 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:52.287 15:19:08 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:52.287 15:19:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:52.287 15:19:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:52.287 15:19:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:52.287 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:52.287 15:19:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:52.287 15:19:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:52.287 15:19:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.287 15:19:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.287 15:19:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:52.287 15:19:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:52.287 15:19:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:52.287 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:52.287 15:19:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:52.287 15:19:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:52.287 15:19:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.287 15:19:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.287 15:19:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:52.287 15:19:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:52.287 15:19:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:52.287 15:19:08 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:52.287 15:19:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:52.287 15:19:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.287 15:19:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:52.287 15:19:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.287 15:19:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:52.287 Found net devices under 0000:31:00.0: cvl_0_0 00:08:52.287 15:19:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.287 15:19:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:52.287 15:19:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.287 15:19:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:52.287 15:19:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.287 15:19:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:52.287 Found net devices under 0000:31:00.1: cvl_0_1 00:08:52.287 15:19:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.287 15:19:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:52.287 15:19:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:52.287 15:19:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:52.287 15:19:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:52.287 15:19:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:52.287 15:19:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.287 15:19:08 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.287 15:19:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:52.288 15:19:08 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:52.288 15:19:08 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:52.288 15:19:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:52.288 15:19:08 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:52.288 15:19:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:52.288 15:19:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.288 15:19:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:52.288 15:19:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:52.288 15:19:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:52.288 15:19:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:52.288 15:19:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:52.288 15:19:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:52.288 15:19:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:52.288 15:19:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:52.288 15:19:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:52.288 15:19:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:52.288 15:19:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:52.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:52.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.505 ms 00:08:52.288 00:08:52.288 --- 10.0.0.2 ping statistics --- 00:08:52.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.288 rtt min/avg/max/mdev = 0.505/0.505/0.505/0.000 ms 00:08:52.288 15:19:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:52.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:52.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:08:52.288 00:08:52.288 --- 10.0.0.1 ping statistics --- 00:08:52.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.288 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:08:52.288 15:19:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.288 15:19:08 -- nvmf/common.sh@411 -- # return 0 00:08:52.288 15:19:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:52.288 15:19:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.288 15:19:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:52.288 15:19:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:52.288 15:19:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.288 15:19:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:52.288 15:19:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:52.288 15:19:08 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:52.288 15:19:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:52.288 15:19:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:52.288 15:19:08 -- common/autotest_common.sh@10 -- # set +x 00:08:52.288 15:19:08 -- nvmf/common.sh@470 -- # nvmfpid=1481581 00:08:52.288 15:19:08 -- nvmf/common.sh@471 -- # waitforlisten 1481581 00:08:52.288 15:19:08 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:52.288 15:19:08 -- common/autotest_common.sh@817 -- # '[' -z 1481581 ']' 00:08:52.288 15:19:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.288 15:19:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:52.288 15:19:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:52.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.288 15:19:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:52.288 15:19:08 -- common/autotest_common.sh@10 -- # set +x 00:08:52.288 [2024-04-26 15:19:08.712955] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:08:52.288 [2024-04-26 15:19:08.713021] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.288 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.288 [2024-04-26 15:19:08.785296] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.288 [2024-04-26 15:19:08.858094] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.288 [2024-04-26 15:19:08.858135] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.288 [2024-04-26 15:19:08.858145] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.288 [2024-04-26 15:19:08.858152] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.288 [2024-04-26 15:19:08.858159] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:52.288 [2024-04-26 15:19:08.858317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.288 [2024-04-26 15:19:08.858448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.288 [2024-04-26 15:19:08.858606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.288 [2024-04-26 15:19:08.858606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.288 15:19:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:52.288 15:19:09 -- common/autotest_common.sh@850 -- # return 0 00:08:52.288 15:19:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:52.288 15:19:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:52.288 15:19:09 -- common/autotest_common.sh@10 -- # set +x 00:08:52.288 15:19:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.288 15:19:09 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:52.288 15:19:09 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:52.288 15:19:09 -- target/multitarget.sh@21 -- # jq length 00:08:52.288 15:19:09 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:52.288 15:19:09 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:52.288 "nvmf_tgt_1" 00:08:52.288 15:19:09 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:52.548 "nvmf_tgt_2" 00:08:52.548 15:19:09 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:52.548 15:19:09 -- target/multitarget.sh@28 -- # jq length 00:08:52.548 
15:19:09 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:52.548 15:19:09 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:52.809 true 00:08:52.809 15:19:10 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:52.809 true 00:08:52.809 15:19:10 -- target/multitarget.sh@35 -- # jq length 00:08:52.809 15:19:10 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:52.809 15:19:10 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:52.809 15:19:10 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:52.809 15:19:10 -- target/multitarget.sh@41 -- # nvmftestfini 00:08:52.809 15:19:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:52.809 15:19:10 -- nvmf/common.sh@117 -- # sync 00:08:52.809 15:19:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:52.809 15:19:10 -- nvmf/common.sh@120 -- # set +e 00:08:52.809 15:19:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:52.809 15:19:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:52.809 rmmod nvme_tcp 00:08:52.809 rmmod nvme_fabrics 00:08:52.809 rmmod nvme_keyring 00:08:53.070 15:19:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:53.070 15:19:10 -- nvmf/common.sh@124 -- # set -e 00:08:53.070 15:19:10 -- nvmf/common.sh@125 -- # return 0 00:08:53.070 15:19:10 -- nvmf/common.sh@478 -- # '[' -n 1481581 ']' 00:08:53.070 15:19:10 -- nvmf/common.sh@479 -- # killprocess 1481581 00:08:53.070 15:19:10 -- common/autotest_common.sh@936 -- # '[' -z 1481581 ']' 00:08:53.070 15:19:10 -- common/autotest_common.sh@940 -- # kill -0 1481581 00:08:53.070 15:19:10 -- common/autotest_common.sh@941 -- # uname 00:08:53.070 15:19:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
00:08:53.070 15:19:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1481581 00:08:53.070 15:19:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:53.070 15:19:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:53.070 15:19:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1481581' 00:08:53.070 killing process with pid 1481581 00:08:53.070 15:19:10 -- common/autotest_common.sh@955 -- # kill 1481581 00:08:53.070 15:19:10 -- common/autotest_common.sh@960 -- # wait 1481581 00:08:53.070 15:19:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:53.070 15:19:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:53.070 15:19:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:53.070 15:19:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:53.070 15:19:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:53.070 15:19:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.070 15:19:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.070 15:19:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.615 15:19:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:55.615 00:08:55.615 real 0m11.285s 00:08:55.615 user 0m9.158s 00:08:55.615 sys 0m5.895s 00:08:55.615 15:19:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:55.615 15:19:12 -- common/autotest_common.sh@10 -- # set +x 00:08:55.615 ************************************ 00:08:55.615 END TEST nvmf_multitarget 00:08:55.615 ************************************ 00:08:55.615 15:19:12 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:55.615 15:19:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:55.615 15:19:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.615 15:19:12 -- common/autotest_common.sh@10 -- # set +x 
00:08:55.615 ************************************ 00:08:55.615 START TEST nvmf_rpc 00:08:55.615 ************************************ 00:08:55.615 15:19:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:55.615 * Looking for test storage... 00:08:55.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:55.615 15:19:12 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:55.615 15:19:12 -- nvmf/common.sh@7 -- # uname -s 00:08:55.615 15:19:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.615 15:19:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.615 15:19:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.615 15:19:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.615 15:19:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.615 15:19:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.615 15:19:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.615 15:19:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.615 15:19:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.615 15:19:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.615 15:19:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:55.615 15:19:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:55.615 15:19:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.615 15:19:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.615 15:19:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:55.615 15:19:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.615 15:19:12 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:55.615 15:19:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.615 15:19:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.615 15:19:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.615 15:19:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.615 15:19:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.615 15:19:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.615 15:19:12 -- paths/export.sh@5 -- # export PATH 00:08:55.616 15:19:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.616 15:19:12 -- nvmf/common.sh@47 -- # : 0 00:08:55.616 15:19:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:55.616 15:19:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:55.616 15:19:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.616 15:19:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.616 15:19:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.616 15:19:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:55.616 15:19:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:55.616 15:19:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:55.616 15:19:12 -- target/rpc.sh@11 -- # loops=5 00:08:55.616 15:19:12 -- target/rpc.sh@23 -- # nvmftestinit 00:08:55.616 15:19:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:55.616 
15:19:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.616 15:19:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:55.616 15:19:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:55.616 15:19:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:55.616 15:19:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.616 15:19:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:55.616 15:19:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.616 15:19:12 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:55.616 15:19:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:55.616 15:19:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:55.616 15:19:12 -- common/autotest_common.sh@10 -- # set +x 00:09:03.761 15:19:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:03.761 15:19:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:03.761 15:19:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:03.761 15:19:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:03.761 15:19:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:03.761 15:19:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:03.761 15:19:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:03.761 15:19:19 -- nvmf/common.sh@295 -- # net_devs=() 00:09:03.761 15:19:19 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:03.761 15:19:19 -- nvmf/common.sh@296 -- # e810=() 00:09:03.761 15:19:19 -- nvmf/common.sh@296 -- # local -ga e810 00:09:03.761 15:19:19 -- nvmf/common.sh@297 -- # x722=() 00:09:03.761 15:19:19 -- nvmf/common.sh@297 -- # local -ga x722 00:09:03.761 15:19:19 -- nvmf/common.sh@298 -- # mlx=() 00:09:03.761 15:19:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:03.761 15:19:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:03.761 15:19:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:03.761 15:19:19 -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:03.761 15:19:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:03.761 15:19:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:03.761 15:19:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:03.761 15:19:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:03.761 15:19:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:03.761 15:19:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:03.761 15:19:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:03.761 15:19:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:03.761 15:19:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:03.761 15:19:19 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:03.761 15:19:19 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:03.761 15:19:19 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:03.761 15:19:19 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:03.761 15:19:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:03.761 15:19:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:03.761 15:19:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:03.761 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:03.761 15:19:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:03.761 15:19:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:03.761 15:19:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.761 15:19:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.761 15:19:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:03.761 15:19:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:03.761 15:19:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:03.761 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:09:03.761 15:19:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:03.761 15:19:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:03.761 15:19:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.761 15:19:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.761 15:19:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:03.761 15:19:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:03.761 15:19:19 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:03.761 15:19:19 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:03.761 15:19:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:03.761 15:19:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.761 15:19:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:03.761 15:19:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.761 15:19:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:03.761 Found net devices under 0000:31:00.0: cvl_0_0 00:09:03.761 15:19:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.761 15:19:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:03.761 15:19:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.761 15:19:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:03.761 15:19:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.761 15:19:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:03.761 Found net devices under 0000:31:00.1: cvl_0_1 00:09:03.761 15:19:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.761 15:19:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:03.761 15:19:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:03.761 15:19:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:03.761 15:19:19 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:03.761 
15:19:19 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:03.761 15:19:19 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.761 15:19:19 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:03.761 15:19:19 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:03.762 15:19:19 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:03.762 15:19:19 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:03.762 15:19:19 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:03.762 15:19:19 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:03.762 15:19:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:03.762 15:19:19 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.762 15:19:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:03.762 15:19:19 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:03.762 15:19:19 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:03.762 15:19:19 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:03.762 15:19:19 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:03.762 15:19:19 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:03.762 15:19:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:03.762 15:19:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:03.762 15:19:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:03.762 15:19:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:03.762 15:19:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:03.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:03.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:09:03.762 00:09:03.762 --- 10.0.0.2 ping statistics --- 00:09:03.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.762 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:09:03.762 15:19:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:03.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:03.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:09:03.762 00:09:03.762 --- 10.0.0.1 ping statistics --- 00:09:03.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.762 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:09:03.762 15:19:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.762 15:19:20 -- nvmf/common.sh@411 -- # return 0 00:09:03.762 15:19:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:03.762 15:19:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.762 15:19:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:03.762 15:19:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:03.762 15:19:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.762 15:19:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:03.762 15:19:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:03.762 15:19:20 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:03.762 15:19:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:03.762 15:19:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:03.762 15:19:20 -- common/autotest_common.sh@10 -- # set +x 00:09:03.762 15:19:20 -- nvmf/common.sh@470 -- # nvmfpid=1486071 00:09:03.762 15:19:20 -- nvmf/common.sh@471 -- # waitforlisten 1486071 00:09:03.762 15:19:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:03.762 15:19:20 -- common/autotest_common.sh@817 -- # 
'[' -z 1486071 ']' 00:09:03.762 15:19:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.762 15:19:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:03.762 15:19:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.762 15:19:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:03.762 15:19:20 -- common/autotest_common.sh@10 -- # set +x 00:09:03.762 [2024-04-26 15:19:20.262726] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:09:03.762 [2024-04-26 15:19:20.262790] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.762 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.762 [2024-04-26 15:19:20.335097] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:03.762 [2024-04-26 15:19:20.410705] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.762 [2024-04-26 15:19:20.410749] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:03.762 [2024-04-26 15:19:20.410757] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.762 [2024-04-26 15:19:20.410769] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.762 [2024-04-26 15:19:20.410774] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:03.762 [2024-04-26 15:19:20.410934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.762 [2024-04-26 15:19:20.411238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.762 [2024-04-26 15:19:20.411398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.762 [2024-04-26 15:19:20.411399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.762 15:19:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:03.762 15:19:21 -- common/autotest_common.sh@850 -- # return 0 00:09:03.762 15:19:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:03.762 15:19:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:03.762 15:19:21 -- common/autotest_common.sh@10 -- # set +x 00:09:03.762 15:19:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.762 15:19:21 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:03.762 15:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.762 15:19:21 -- common/autotest_common.sh@10 -- # set +x 00:09:03.762 15:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.762 15:19:21 -- target/rpc.sh@26 -- # stats='{ 00:09:03.762 "tick_rate": 2400000000, 00:09:03.762 "poll_groups": [ 00:09:03.762 { 00:09:03.762 "name": "nvmf_tgt_poll_group_0", 00:09:03.762 "admin_qpairs": 0, 00:09:03.762 "io_qpairs": 0, 00:09:03.762 "current_admin_qpairs": 0, 00:09:03.762 "current_io_qpairs": 0, 00:09:03.762 "pending_bdev_io": 0, 00:09:03.762 "completed_nvme_io": 0, 00:09:03.762 "transports": [] 00:09:03.762 }, 00:09:03.762 { 00:09:03.762 "name": "nvmf_tgt_poll_group_1", 00:09:03.762 "admin_qpairs": 0, 00:09:03.762 "io_qpairs": 0, 00:09:03.762 "current_admin_qpairs": 0, 00:09:03.762 "current_io_qpairs": 0, 00:09:03.762 "pending_bdev_io": 0, 00:09:03.762 "completed_nvme_io": 0, 00:09:03.762 "transports": [] 00:09:03.762 }, 00:09:03.762 { 00:09:03.762 "name": 
"nvmf_tgt_poll_group_2", 00:09:03.762 "admin_qpairs": 0, 00:09:03.762 "io_qpairs": 0, 00:09:03.762 "current_admin_qpairs": 0, 00:09:03.762 "current_io_qpairs": 0, 00:09:03.762 "pending_bdev_io": 0, 00:09:03.762 "completed_nvme_io": 0, 00:09:03.762 "transports": [] 00:09:03.762 }, 00:09:03.762 { 00:09:03.762 "name": "nvmf_tgt_poll_group_3", 00:09:03.762 "admin_qpairs": 0, 00:09:03.762 "io_qpairs": 0, 00:09:03.762 "current_admin_qpairs": 0, 00:09:03.762 "current_io_qpairs": 0, 00:09:03.762 "pending_bdev_io": 0, 00:09:03.762 "completed_nvme_io": 0, 00:09:03.762 "transports": [] 00:09:03.762 } 00:09:03.762 ] 00:09:03.762 }' 00:09:03.762 15:19:21 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:03.762 15:19:21 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:03.762 15:19:21 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:03.762 15:19:21 -- target/rpc.sh@15 -- # wc -l 00:09:03.762 15:19:21 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:03.762 15:19:21 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:03.762 15:19:21 -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:03.762 15:19:21 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:03.762 15:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.762 15:19:21 -- common/autotest_common.sh@10 -- # set +x 00:09:03.762 [2024-04-26 15:19:21.204750] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:04.024 15:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.024 15:19:21 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:04.024 15:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.024 15:19:21 -- common/autotest_common.sh@10 -- # set +x 00:09:04.024 15:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.024 15:19:21 -- target/rpc.sh@33 -- # stats='{ 00:09:04.024 "tick_rate": 2400000000, 00:09:04.024 "poll_groups": [ 00:09:04.024 { 00:09:04.024 "name": 
"nvmf_tgt_poll_group_0", 00:09:04.024 "admin_qpairs": 0, 00:09:04.024 "io_qpairs": 0, 00:09:04.024 "current_admin_qpairs": 0, 00:09:04.024 "current_io_qpairs": 0, 00:09:04.024 "pending_bdev_io": 0, 00:09:04.024 "completed_nvme_io": 0, 00:09:04.024 "transports": [ 00:09:04.024 { 00:09:04.024 "trtype": "TCP" 00:09:04.024 } 00:09:04.024 ] 00:09:04.024 }, 00:09:04.024 { 00:09:04.024 "name": "nvmf_tgt_poll_group_1", 00:09:04.024 "admin_qpairs": 0, 00:09:04.024 "io_qpairs": 0, 00:09:04.024 "current_admin_qpairs": 0, 00:09:04.024 "current_io_qpairs": 0, 00:09:04.024 "pending_bdev_io": 0, 00:09:04.024 "completed_nvme_io": 0, 00:09:04.024 "transports": [ 00:09:04.024 { 00:09:04.024 "trtype": "TCP" 00:09:04.024 } 00:09:04.024 ] 00:09:04.024 }, 00:09:04.024 { 00:09:04.024 "name": "nvmf_tgt_poll_group_2", 00:09:04.024 "admin_qpairs": 0, 00:09:04.024 "io_qpairs": 0, 00:09:04.024 "current_admin_qpairs": 0, 00:09:04.024 "current_io_qpairs": 0, 00:09:04.024 "pending_bdev_io": 0, 00:09:04.024 "completed_nvme_io": 0, 00:09:04.024 "transports": [ 00:09:04.024 { 00:09:04.024 "trtype": "TCP" 00:09:04.024 } 00:09:04.024 ] 00:09:04.024 }, 00:09:04.024 { 00:09:04.024 "name": "nvmf_tgt_poll_group_3", 00:09:04.024 "admin_qpairs": 0, 00:09:04.024 "io_qpairs": 0, 00:09:04.024 "current_admin_qpairs": 0, 00:09:04.024 "current_io_qpairs": 0, 00:09:04.024 "pending_bdev_io": 0, 00:09:04.024 "completed_nvme_io": 0, 00:09:04.024 "transports": [ 00:09:04.024 { 00:09:04.024 "trtype": "TCP" 00:09:04.024 } 00:09:04.024 ] 00:09:04.024 } 00:09:04.024 ] 00:09:04.024 }' 00:09:04.024 15:19:21 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:04.024 15:19:21 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:04.024 15:19:21 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:04.024 15:19:21 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:04.024 15:19:21 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:04.024 15:19:21 -- target/rpc.sh@36 -- # jsum 
'.poll_groups[].io_qpairs' 00:09:04.024 15:19:21 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:04.024 15:19:21 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:04.024 15:19:21 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:04.024 15:19:21 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:04.024 15:19:21 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:04.024 15:19:21 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:04.024 15:19:21 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:04.024 15:19:21 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:04.024 15:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.024 15:19:21 -- common/autotest_common.sh@10 -- # set +x 00:09:04.024 Malloc1 00:09:04.024 15:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.024 15:19:21 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:04.024 15:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.024 15:19:21 -- common/autotest_common.sh@10 -- # set +x 00:09:04.024 15:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.024 15:19:21 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:04.024 15:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.024 15:19:21 -- common/autotest_common.sh@10 -- # set +x 00:09:04.024 15:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.024 15:19:21 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:04.024 15:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.024 15:19:21 -- common/autotest_common.sh@10 -- # set +x 00:09:04.024 15:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.024 15:19:21 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:09:04.024 15:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.024 15:19:21 -- common/autotest_common.sh@10 -- # set +x 00:09:04.024 [2024-04-26 15:19:21.396503] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.024 15:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.024 15:19:21 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:09:04.024 15:19:21 -- common/autotest_common.sh@638 -- # local es=0 00:09:04.024 15:19:21 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:09:04.024 15:19:21 -- common/autotest_common.sh@626 -- # local arg=nvme 00:09:04.024 15:19:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:04.024 15:19:21 -- common/autotest_common.sh@630 -- # type -t nvme 00:09:04.024 15:19:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:04.024 15:19:21 -- common/autotest_common.sh@632 -- # type -P nvme 00:09:04.024 15:19:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:04.024 15:19:21 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:09:04.024 15:19:21 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:09:04.024 15:19:21 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:09:04.024 [2024-04-26 15:19:21.423295] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:09:04.024 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:04.024 could not add new controller: failed to write to nvme-fabrics device 00:09:04.024 15:19:21 -- common/autotest_common.sh@641 -- # es=1 00:09:04.024 15:19:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:04.024 15:19:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:04.024 15:19:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:04.024 15:19:21 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:04.024 15:19:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.024 15:19:21 -- common/autotest_common.sh@10 -- # set +x 00:09:04.024 15:19:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.024 15:19:21 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:05.937 15:19:22 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:05.937 15:19:22 -- common/autotest_common.sh@1184 -- # local i=0 00:09:05.937 15:19:22 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:05.937 15:19:22 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:05.937 15:19:22 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:07.851 15:19:24 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:07.851 15:19:24 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:07.851 15:19:24 -- common/autotest_common.sh@1193 -- 
# grep -c SPDKISFASTANDAWESOME 00:09:07.851 15:19:24 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:07.851 15:19:24 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:07.851 15:19:24 -- common/autotest_common.sh@1194 -- # return 0 00:09:07.851 15:19:24 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:07.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.851 15:19:25 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:07.851 15:19:25 -- common/autotest_common.sh@1205 -- # local i=0 00:09:07.851 15:19:25 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:07.851 15:19:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.851 15:19:25 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:07.851 15:19:25 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.851 15:19:25 -- common/autotest_common.sh@1217 -- # return 0 00:09:07.851 15:19:25 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:07.851 15:19:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:07.851 15:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.851 15:19:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:07.851 15:19:25 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:07.851 15:19:25 -- common/autotest_common.sh@638 -- # local es=0 00:09:07.851 15:19:25 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:09:07.851 15:19:25 -- common/autotest_common.sh@626 -- # local arg=nvme 00:09:07.851 15:19:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:07.851 15:19:25 -- common/autotest_common.sh@630 -- # type -t nvme 00:09:07.851 15:19:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:07.851 15:19:25 -- common/autotest_common.sh@632 -- # type -P nvme 00:09:07.851 15:19:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:07.851 15:19:25 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:09:07.851 15:19:25 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:09:07.851 15:19:25 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:07.851 [2024-04-26 15:19:25.150047] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:09:07.851 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:07.851 could not add new controller: failed to write to nvme-fabrics device 00:09:07.851 15:19:25 -- common/autotest_common.sh@641 -- # es=1 00:09:07.851 15:19:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:07.851 15:19:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:07.851 15:19:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:07.851 15:19:25 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:07.851 15:19:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:07.851 15:19:25 -- common/autotest_common.sh@10 -- # set +x 00:09:07.851 15:19:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:07.852 15:19:25 -- target/rpc.sh@73 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:09.252 15:19:26 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:09.252 15:19:26 -- common/autotest_common.sh@1184 -- # local i=0 00:09:09.252 15:19:26 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:09.252 15:19:26 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:09.252 15:19:26 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:11.256 15:19:28 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:11.256 15:19:28 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:11.256 15:19:28 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:11.256 15:19:28 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:11.256 15:19:28 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:11.256 15:19:28 -- common/autotest_common.sh@1194 -- # return 0 00:09:11.256 15:19:28 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:11.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.517 15:19:28 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:11.517 15:19:28 -- common/autotest_common.sh@1205 -- # local i=0 00:09:11.517 15:19:28 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:11.517 15:19:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.517 15:19:28 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:11.517 15:19:28 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.517 15:19:28 -- common/autotest_common.sh@1217 -- # return 0 00:09:11.517 15:19:28 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.517 15:19:28 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:09:11.517 15:19:28 -- common/autotest_common.sh@10 -- # set +x 00:09:11.517 15:19:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.517 15:19:28 -- target/rpc.sh@81 -- # seq 1 5 00:09:11.517 15:19:28 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:11.517 15:19:28 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:11.517 15:19:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.517 15:19:28 -- common/autotest_common.sh@10 -- # set +x 00:09:11.517 15:19:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.517 15:19:28 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.517 15:19:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.517 15:19:28 -- common/autotest_common.sh@10 -- # set +x 00:09:11.517 [2024-04-26 15:19:28.902402] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.517 15:19:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.517 15:19:28 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:11.517 15:19:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.517 15:19:28 -- common/autotest_common.sh@10 -- # set +x 00:09:11.517 15:19:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.517 15:19:28 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:11.517 15:19:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.517 15:19:28 -- common/autotest_common.sh@10 -- # set +x 00:09:11.517 15:19:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.517 15:19:28 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:09:13.430 15:19:30 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:13.430 15:19:30 -- common/autotest_common.sh@1184 -- # local i=0 00:09:13.430 15:19:30 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:13.430 15:19:30 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:13.430 15:19:30 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:15.346 15:19:32 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:15.346 15:19:32 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:15.346 15:19:32 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:15.346 15:19:32 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:15.346 15:19:32 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:15.346 15:19:32 -- common/autotest_common.sh@1194 -- # return 0 00:09:15.346 15:19:32 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:15.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.346 15:19:32 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:15.346 15:19:32 -- common/autotest_common.sh@1205 -- # local i=0 00:09:15.346 15:19:32 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:15.346 15:19:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.346 15:19:32 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:15.346 15:19:32 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.346 15:19:32 -- common/autotest_common.sh@1217 -- # return 0 00:09:15.346 15:19:32 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:15.346 15:19:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.346 15:19:32 -- common/autotest_common.sh@10 -- # set +x 00:09:15.346 15:19:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:09:15.346 15:19:32 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:15.346 15:19:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.346 15:19:32 -- common/autotest_common.sh@10 -- # set +x 00:09:15.346 15:19:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.346 15:19:32 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:15.346 15:19:32 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:15.346 15:19:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.346 15:19:32 -- common/autotest_common.sh@10 -- # set +x 00:09:15.346 15:19:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.346 15:19:32 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:15.346 15:19:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.346 15:19:32 -- common/autotest_common.sh@10 -- # set +x 00:09:15.346 [2024-04-26 15:19:32.604090] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.346 15:19:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.346 15:19:32 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:15.346 15:19:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.346 15:19:32 -- common/autotest_common.sh@10 -- # set +x 00:09:15.346 15:19:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.346 15:19:32 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:15.346 15:19:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.346 15:19:32 -- common/autotest_common.sh@10 -- # set +x 00:09:15.346 15:19:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.346 15:19:32 -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:16.732 15:19:34 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:16.732 15:19:34 -- common/autotest_common.sh@1184 -- # local i=0 00:09:16.732 15:19:34 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:16.732 15:19:34 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:16.732 15:19:34 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:19.326 15:19:36 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:19.326 15:19:36 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:19.326 15:19:36 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:19.326 15:19:36 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:19.326 15:19:36 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:19.326 15:19:36 -- common/autotest_common.sh@1194 -- # return 0 00:09:19.326 15:19:36 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:19.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.326 15:19:36 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:19.326 15:19:36 -- common/autotest_common.sh@1205 -- # local i=0 00:09:19.326 15:19:36 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:19.326 15:19:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.326 15:19:36 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:19.326 15:19:36 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.326 15:19:36 -- common/autotest_common.sh@1217 -- # return 0 00:09:19.326 15:19:36 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.326 15:19:36 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:09:19.326 15:19:36 -- common/autotest_common.sh@10 -- # set +x 00:09:19.326 15:19:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:19.326 15:19:36 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.326 15:19:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:19.326 15:19:36 -- common/autotest_common.sh@10 -- # set +x 00:09:19.326 15:19:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:19.326 15:19:36 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:19.326 15:19:36 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:19.326 15:19:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:19.326 15:19:36 -- common/autotest_common.sh@10 -- # set +x 00:09:19.326 15:19:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:19.326 15:19:36 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.326 15:19:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:19.326 15:19:36 -- common/autotest_common.sh@10 -- # set +x 00:09:19.326 [2024-04-26 15:19:36.344333] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.326 15:19:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:19.326 15:19:36 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:19.326 15:19:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:19.326 15:19:36 -- common/autotest_common.sh@10 -- # set +x 00:09:19.326 15:19:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:19.326 15:19:36 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:19.326 15:19:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:19.326 15:19:36 -- common/autotest_common.sh@10 -- # set +x 00:09:19.326 15:19:36 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:19.326 15:19:36 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:20.714 15:19:37 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:20.714 15:19:37 -- common/autotest_common.sh@1184 -- # local i=0 00:09:20.714 15:19:37 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:20.714 15:19:37 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:20.714 15:19:37 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:22.631 15:19:39 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:22.631 15:19:39 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:22.631 15:19:39 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:22.631 15:19:39 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:22.631 15:19:39 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:22.631 15:19:39 -- common/autotest_common.sh@1194 -- # return 0 00:09:22.631 15:19:39 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:22.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.631 15:19:39 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:22.632 15:19:39 -- common/autotest_common.sh@1205 -- # local i=0 00:09:22.632 15:19:39 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:22.632 15:19:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.632 15:19:39 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:22.632 15:19:39 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.632 15:19:40 -- common/autotest_common.sh@1217 -- # return 0 00:09:22.632 15:19:40 -- target/rpc.sh@93 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:22.632 15:19:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:22.632 15:19:40 -- common/autotest_common.sh@10 -- # set +x 00:09:22.632 15:19:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:22.632 15:19:40 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:22.632 15:19:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:22.632 15:19:40 -- common/autotest_common.sh@10 -- # set +x 00:09:22.632 15:19:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:22.632 15:19:40 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:22.632 15:19:40 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:22.632 15:19:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:22.632 15:19:40 -- common/autotest_common.sh@10 -- # set +x 00:09:22.632 15:19:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:22.632 15:19:40 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.632 15:19:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:22.632 15:19:40 -- common/autotest_common.sh@10 -- # set +x 00:09:22.632 [2024-04-26 15:19:40.044989] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.632 15:19:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:22.632 15:19:40 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:22.632 15:19:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:22.632 15:19:40 -- common/autotest_common.sh@10 -- # set +x 00:09:22.632 15:19:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:22.632 15:19:40 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:22.632 15:19:40 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:09:22.632 15:19:40 -- common/autotest_common.sh@10 -- # set +x 00:09:22.632 15:19:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:22.632 15:19:40 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:24.545 15:19:41 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:24.545 15:19:41 -- common/autotest_common.sh@1184 -- # local i=0 00:09:24.545 15:19:41 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:24.545 15:19:41 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:24.545 15:19:41 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:26.459 15:19:43 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:26.459 15:19:43 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:26.459 15:19:43 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:26.459 15:19:43 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:26.459 15:19:43 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:26.459 15:19:43 -- common/autotest_common.sh@1194 -- # return 0 00:09:26.459 15:19:43 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:26.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.459 15:19:43 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:26.459 15:19:43 -- common/autotest_common.sh@1205 -- # local i=0 00:09:26.459 15:19:43 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:26.459 15:19:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:26.459 15:19:43 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:26.459 15:19:43 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:26.459 
15:19:43 -- common/autotest_common.sh@1217 -- # return 0 00:09:26.459 15:19:43 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:26.459 15:19:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:26.459 15:19:43 -- common/autotest_common.sh@10 -- # set +x 00:09:26.459 15:19:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:26.459 15:19:43 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:26.459 15:19:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:26.459 15:19:43 -- common/autotest_common.sh@10 -- # set +x 00:09:26.459 15:19:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:26.459 15:19:43 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:26.459 15:19:43 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:26.459 15:19:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:26.459 15:19:43 -- common/autotest_common.sh@10 -- # set +x 00:09:26.459 15:19:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:26.459 15:19:43 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:26.459 15:19:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:26.459 15:19:43 -- common/autotest_common.sh@10 -- # set +x 00:09:26.459 [2024-04-26 15:19:43.789592] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.459 15:19:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:26.459 15:19:43 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:26.459 15:19:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:26.459 15:19:43 -- common/autotest_common.sh@10 -- # set +x 00:09:26.459 15:19:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:26.459 15:19:43 -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:26.459 15:19:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:26.459 15:19:43 -- common/autotest_common.sh@10 -- # set +x 00:09:26.459 15:19:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:26.459 15:19:43 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:28.373 15:19:45 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:28.373 15:19:45 -- common/autotest_common.sh@1184 -- # local i=0 00:09:28.373 15:19:45 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:28.373 15:19:45 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:28.373 15:19:45 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:30.288 15:19:47 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:30.288 15:19:47 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:30.288 15:19:47 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:30.288 15:19:47 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:30.288 15:19:47 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:30.288 15:19:47 -- common/autotest_common.sh@1194 -- # return 0 00:09:30.288 15:19:47 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:30.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.288 15:19:47 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:30.288 15:19:47 -- common/autotest_common.sh@1205 -- # local i=0 00:09:30.288 15:19:47 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:30.288 15:19:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.288 15:19:47 -- common/autotest_common.sh@1213 -- # lsblk -l -o 
NAME,SERIAL 00:09:30.288 15:19:47 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.288 15:19:47 -- common/autotest_common.sh@1217 -- # return 0 00:09:30.288 15:19:47 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:30.288 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.288 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.288 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.288 15:19:47 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.288 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.288 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.288 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.288 15:19:47 -- target/rpc.sh@99 -- # seq 1 5 00:09:30.288 15:19:47 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.288 15:19:47 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.288 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.288 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.288 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.288 15:19:47 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.288 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.288 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.288 [2024-04-26 15:19:47.523729] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.288 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.288 15:19:47 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.288 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.288 15:19:47 -- 
common/autotest_common.sh@10 -- # set +x 00:09:30.288 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.288 15:19:47 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.288 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.288 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.288 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.288 15:19:47 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.288 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.288 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.288 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.288 15:19:47 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.288 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.288 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.288 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.288 15:19:47 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.288 15:19:47 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.288 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.288 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.288 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.288 15:19:47 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.288 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.288 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.288 [2024-04-26 15:19:47.583860] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.288 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.288 
15:19:47 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.288 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.288 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.288 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.288 15:19:47 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.288 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.288 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.288 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.288 15:19:47 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.288 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.288 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.288 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.288 15:19:47 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.288 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.288 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.288 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.288 15:19:47 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.288 15:19:47 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.288 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.288 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.289 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.289 15:19:47 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.289 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.289 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.289 
[2024-04-26 15:19:47.640009] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.289 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.289 15:19:47 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.289 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.289 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.289 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.289 15:19:47 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.289 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.289 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.289 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.289 15:19:47 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.289 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.289 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.289 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.289 15:19:47 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.289 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.289 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.289 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.289 15:19:47 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.289 15:19:47 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.289 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.289 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.289 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.289 15:19:47 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.289 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.289 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.289 [2024-04-26 15:19:47.700179] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.289 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.289 15:19:47 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.289 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.289 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.289 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.289 15:19:47 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.289 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.289 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.289 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.289 15:19:47 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.289 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.289 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.289 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.289 15:19:47 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.289 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.289 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.550 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.550 15:19:47 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.550 15:19:47 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.550 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.550 15:19:47 
-- common/autotest_common.sh@10 -- # set +x 00:09:30.550 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.550 15:19:47 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.550 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.550 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.550 [2024-04-26 15:19:47.760370] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.550 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.550 15:19:47 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.550 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.550 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.550 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.550 15:19:47 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.550 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.550 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.550 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.550 15:19:47 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.550 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.550 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.550 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.550 15:19:47 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.550 15:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.550 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.550 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.550 15:19:47 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:30.550 15:19:47 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.550 15:19:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.550 15:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.550 15:19:47 -- target/rpc.sh@110 -- # stats='{ 00:09:30.550 "tick_rate": 2400000000, 00:09:30.550 "poll_groups": [ 00:09:30.550 { 00:09:30.550 "name": "nvmf_tgt_poll_group_0", 00:09:30.550 "admin_qpairs": 0, 00:09:30.550 "io_qpairs": 224, 00:09:30.550 "current_admin_qpairs": 0, 00:09:30.550 "current_io_qpairs": 0, 00:09:30.550 "pending_bdev_io": 0, 00:09:30.550 "completed_nvme_io": 225, 00:09:30.550 "transports": [ 00:09:30.550 { 00:09:30.550 "trtype": "TCP" 00:09:30.550 } 00:09:30.550 ] 00:09:30.550 }, 00:09:30.550 { 00:09:30.550 "name": "nvmf_tgt_poll_group_1", 00:09:30.550 "admin_qpairs": 1, 00:09:30.550 "io_qpairs": 223, 00:09:30.550 "current_admin_qpairs": 0, 00:09:30.550 "current_io_qpairs": 0, 00:09:30.550 "pending_bdev_io": 0, 00:09:30.550 "completed_nvme_io": 449, 00:09:30.550 "transports": [ 00:09:30.550 { 00:09:30.550 "trtype": "TCP" 00:09:30.550 } 00:09:30.550 ] 00:09:30.550 }, 00:09:30.550 { 00:09:30.550 "name": "nvmf_tgt_poll_group_2", 00:09:30.550 "admin_qpairs": 6, 00:09:30.550 "io_qpairs": 218, 00:09:30.550 "current_admin_qpairs": 0, 00:09:30.550 "current_io_qpairs": 0, 00:09:30.550 "pending_bdev_io": 0, 00:09:30.550 "completed_nvme_io": 316, 00:09:30.550 "transports": [ 00:09:30.550 { 00:09:30.550 "trtype": "TCP" 00:09:30.550 } 00:09:30.550 ] 00:09:30.550 }, 00:09:30.550 { 00:09:30.550 "name": "nvmf_tgt_poll_group_3", 00:09:30.550 "admin_qpairs": 0, 00:09:30.550 "io_qpairs": 224, 00:09:30.550 "current_admin_qpairs": 0, 00:09:30.550 "current_io_qpairs": 0, 00:09:30.550 "pending_bdev_io": 0, 00:09:30.550 "completed_nvme_io": 249, 00:09:30.550 "transports": [ 00:09:30.550 { 00:09:30.550 "trtype": "TCP" 00:09:30.550 } 00:09:30.550 ] 00:09:30.550 } 00:09:30.550 ] 00:09:30.550 }' 00:09:30.550 15:19:47 -- target/rpc.sh@112 -- # jsum 
'.poll_groups[].admin_qpairs' 00:09:30.550 15:19:47 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:30.550 15:19:47 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:30.550 15:19:47 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:30.550 15:19:47 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:30.550 15:19:47 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:30.550 15:19:47 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:30.551 15:19:47 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:30.551 15:19:47 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:30.551 15:19:47 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:09:30.551 15:19:47 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:30.551 15:19:47 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:30.551 15:19:47 -- target/rpc.sh@123 -- # nvmftestfini 00:09:30.551 15:19:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:30.551 15:19:47 -- nvmf/common.sh@117 -- # sync 00:09:30.551 15:19:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:30.551 15:19:47 -- nvmf/common.sh@120 -- # set +e 00:09:30.551 15:19:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:30.551 15:19:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:30.551 rmmod nvme_tcp 00:09:30.551 rmmod nvme_fabrics 00:09:30.551 rmmod nvme_keyring 00:09:30.551 15:19:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:30.551 15:19:47 -- nvmf/common.sh@124 -- # set -e 00:09:30.551 15:19:47 -- nvmf/common.sh@125 -- # return 0 00:09:30.551 15:19:47 -- nvmf/common.sh@478 -- # '[' -n 1486071 ']' 00:09:30.551 15:19:47 -- nvmf/common.sh@479 -- # killprocess 1486071 00:09:30.551 15:19:47 -- common/autotest_common.sh@936 -- # '[' -z 1486071 ']' 00:09:30.551 15:19:47 -- common/autotest_common.sh@940 -- # kill -0 1486071 00:09:30.551 15:19:47 -- common/autotest_common.sh@941 -- # uname 00:09:30.551 15:19:47 -- common/autotest_common.sh@941 -- # '[' 
Linux = Linux ']' 00:09:30.551 15:19:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1486071 00:09:30.812 15:19:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:30.812 15:19:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:30.812 15:19:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1486071' 00:09:30.812 killing process with pid 1486071 00:09:30.812 15:19:48 -- common/autotest_common.sh@955 -- # kill 1486071 00:09:30.812 15:19:48 -- common/autotest_common.sh@960 -- # wait 1486071 00:09:30.812 15:19:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:30.812 15:19:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:30.812 15:19:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:30.812 15:19:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:30.812 15:19:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:30.812 15:19:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.812 15:19:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:30.812 15:19:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.360 15:19:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:33.360 00:09:33.360 real 0m37.502s 00:09:33.360 user 1m53.084s 00:09:33.360 sys 0m7.219s 00:09:33.360 15:19:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:33.360 15:19:50 -- common/autotest_common.sh@10 -- # set +x 00:09:33.360 ************************************ 00:09:33.360 END TEST nvmf_rpc 00:09:33.360 ************************************ 00:09:33.360 15:19:50 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:33.360 15:19:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:33.360 15:19:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:33.360 15:19:50 -- 
common/autotest_common.sh@10 -- # set +x 00:09:33.360 ************************************ 00:09:33.360 START TEST nvmf_invalid 00:09:33.360 ************************************ 00:09:33.360 15:19:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:33.360 * Looking for test storage... 00:09:33.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.360 15:19:50 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.360 15:19:50 -- nvmf/common.sh@7 -- # uname -s 00:09:33.360 15:19:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.360 15:19:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.360 15:19:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.360 15:19:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.360 15:19:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.360 15:19:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.360 15:19:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.360 15:19:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.360 15:19:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.360 15:19:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.360 15:19:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:33.360 15:19:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:33.360 15:19:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.360 15:19:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.360 15:19:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.360 15:19:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.360 15:19:50 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.360 15:19:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.360 15:19:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.360 15:19:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.360 15:19:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.360 15:19:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.360 15:19:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.360 15:19:50 -- paths/export.sh@5 -- # export PATH 00:09:33.360 15:19:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.361 15:19:50 -- nvmf/common.sh@47 -- # : 0 00:09:33.361 15:19:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:33.361 15:19:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:33.361 15:19:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.361 15:19:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.361 15:19:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.361 15:19:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:33.361 15:19:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:33.361 15:19:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:33.361 15:19:50 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:33.361 15:19:50 -- 
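Note that the exported PATH above carries eight copies of the same `/opt/golangci`, `/opt/protoc`, and `/opt/go` entries, because `paths/export.sh` prepends unconditionally every time it is sourced. A common guard against that accumulation is to prepend only when the directory is not already present (this is our sketch, not what `paths/export.sh` actually does):

```shell
#!/usr/bin/env bash
# Idempotent PATH prepend: wrap PATH in colons so the match works for the
# first, last, and middle positions alike, then prepend only on a miss.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;            # already on PATH, do nothing
        *) PATH="$1:$PATH" ;;
    esac
}
```

Sourcing a script built on this guard any number of times leaves PATH with one copy of each entry.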
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.361 15:19:50 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:33.361 15:19:50 -- target/invalid.sh@14 -- # target=foobar 00:09:33.361 15:19:50 -- target/invalid.sh@16 -- # RANDOM=0 00:09:33.361 15:19:50 -- target/invalid.sh@34 -- # nvmftestinit 00:09:33.361 15:19:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:33.361 15:19:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.361 15:19:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:33.361 15:19:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:33.361 15:19:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:33.361 15:19:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.361 15:19:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.361 15:19:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.361 15:19:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:33.361 15:19:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:33.361 15:19:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:33.361 15:19:50 -- common/autotest_common.sh@10 -- # set +x 00:09:41.512 15:19:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:41.512 15:19:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:41.512 15:19:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:41.512 15:19:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:41.512 15:19:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:41.512 15:19:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:41.512 15:19:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:41.512 15:19:57 -- nvmf/common.sh@295 -- # net_devs=() 00:09:41.512 15:19:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:41.512 15:19:57 -- nvmf/common.sh@296 -- # e810=() 00:09:41.512 15:19:57 -- nvmf/common.sh@296 -- # local -ga e810 00:09:41.512 
15:19:57 -- nvmf/common.sh@297 -- # x722=() 00:09:41.512 15:19:57 -- nvmf/common.sh@297 -- # local -ga x722 00:09:41.512 15:19:57 -- nvmf/common.sh@298 -- # mlx=() 00:09:41.512 15:19:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:41.512 15:19:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:41.512 15:19:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:41.512 15:19:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:41.512 15:19:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:41.512 15:19:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:41.512 15:19:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:41.512 15:19:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:41.512 15:19:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:41.512 15:19:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:41.512 15:19:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:41.512 15:19:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:41.512 15:19:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:41.512 15:19:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:41.512 15:19:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:41.512 15:19:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:41.512 15:19:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:41.512 15:19:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:41.512 15:19:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:41.512 15:19:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:41.512 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:41.512 15:19:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:41.512 15:19:57 -- nvmf/common.sh@346 -- # [[ 
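The device scan above sorts NICs into the `e810`, `x722`, and `mlx` arrays by PCI vendor:device pairs (`0x8086:0x159b` is the Intel E810 seen on this node). A minimal sketch of that classification with a bash associative array; the table lists only the IDs visible in this trace and is illustrative, not SPDK's full map:

```shell
#!/usr/bin/env bash
# Map PCI vendor:device IDs to a NIC family, defaulting to "unknown".
declare -A nic_family=(
    ["0x8086:0x1592"]=e810
    ["0x8086:0x159b"]=e810
    ["0x8086:0x37d2"]=x722
    ["0x15b3:0x1017"]=mlx
    ["0x15b3:0x1019"]=mlx
)
classify_nic() {
    echo "${nic_family[$1]:-unknown}"
}
```

With this table, both ports found in the trace (`0x8086 - 0x159b`) classify as `e810`, which is why the script takes the `[[ e810 == e810 ]]` branch.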
ice == unbound ]] 00:09:41.512 15:19:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.512 15:19:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.512 15:19:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:41.512 15:19:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:41.512 15:19:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:41.512 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:41.512 15:19:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:41.512 15:19:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:41.512 15:19:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.512 15:19:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.512 15:19:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:41.512 15:19:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:41.512 15:19:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:41.512 15:19:57 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:41.512 15:19:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:41.512 15:19:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.512 15:19:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:41.512 15:19:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.512 15:19:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:41.512 Found net devices under 0000:31:00.0: cvl_0_0 00:09:41.512 15:19:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.512 15:19:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:41.512 15:19:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.512 15:19:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:41.512 15:19:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.512 15:19:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:31:00.1: cvl_0_1' 00:09:41.512 Found net devices under 0000:31:00.1: cvl_0_1 00:09:41.512 15:19:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.512 15:19:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:41.512 15:19:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:41.512 15:19:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:41.512 15:19:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:41.512 15:19:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:41.512 15:19:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.512 15:19:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.512 15:19:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:41.512 15:19:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:41.512 15:19:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:41.512 15:19:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:41.512 15:19:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:41.512 15:19:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:41.512 15:19:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.512 15:19:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:41.512 15:19:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:41.512 15:19:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:41.512 15:19:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:41.512 15:19:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:41.512 15:19:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:41.512 15:19:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:41.512 15:19:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:41.512 15:19:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
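The per-device loop above finds each port's kernel interface by globbing `/sys/bus/pci/devices/<pci>/net/*` and then stripping the directory prefix with `"${pci_net_devs[@]##*/}"`. The same idiom on a mock array, so no real sysfs is needed:

```shell
#!/usr/bin/env bash
# "${arr[@]##*/}" applies the longest-prefix strip to every element,
# leaving just the basename (the interface name) for each sysfs path.
pci_net_devs=("/sys/bus/pci/devices/0000:31:00.0/net/cvl_0_0"
              "/sys/bus/pci/devices/0000:31:00.1/net/cvl_0_1")
pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
echo "${pci_net_devs[@]}"                 # prints: cvl_0_0 cvl_0_1
```

Those stripped names are what get appended to `net_devs` and later chosen as the target (`cvl_0_0`) and initiator (`cvl_0_1`) interfaces.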
00:09:41.512 15:19:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:41.512 15:19:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:41.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:41.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:09:41.512 00:09:41.512 --- 10.0.0.2 ping statistics --- 00:09:41.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.512 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:09:41.512 15:19:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:41.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:41.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:09:41.512 00:09:41.512 --- 10.0.0.1 ping statistics --- 00:09:41.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.512 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:09:41.512 15:19:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.512 15:19:57 -- nvmf/common.sh@411 -- # return 0 00:09:41.512 15:19:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:41.512 15:19:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.512 15:19:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:41.512 15:19:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:41.512 15:19:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.512 15:19:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:41.512 15:19:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:41.512 15:19:57 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:41.512 15:19:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:41.512 15:19:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:41.512 15:19:57 -- common/autotest_common.sh@10 -- # set +x 00:09:41.512 15:19:57 -- nvmf/common.sh@470 -- # nvmfpid=1495967 00:09:41.512 15:19:57 -- nvmf/common.sh@471 -- # 
waitforlisten 1495967 00:09:41.512 15:19:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:41.512 15:19:57 -- common/autotest_common.sh@817 -- # '[' -z 1495967 ']' 00:09:41.512 15:19:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.512 15:19:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:41.512 15:19:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.512 15:19:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:41.512 15:19:57 -- common/autotest_common.sh@10 -- # set +x 00:09:41.512 [2024-04-26 15:19:57.846292] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:09:41.512 [2024-04-26 15:19:57.846374] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.512 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.512 [2024-04-26 15:19:57.918823] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:41.512 [2024-04-26 15:19:57.993090] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.512 [2024-04-26 15:19:57.993128] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.512 [2024-04-26 15:19:57.993141] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.512 [2024-04-26 15:19:57.993149] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
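`waitforlisten` above polls with `max_retries=100` until the freshly started target is accepting connections on `/var/tmp/spdk.sock`. A generic sketch of that poll-with-retry shape, parameterised on an arbitrary test command (the helper name `wait_for` and the 0.1 s interval are ours, not the actual waitforlisten internals):

```shell
#!/usr/bin/env bash
# Retry an arbitrary readiness check until it succeeds or the retry
# budget is exhausted; return the check's final verdict to the caller.
wait_for() {
    local max_retries=$1; shift
    local i
    for ((i = 0; i < max_retries; i++)); do
        "$@" && return 0
        sleep 0.1
    done
    return 1
}
```

In the real script the readiness check is an RPC against the UNIX socket rather than a file test, but the loop structure is the same.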
00:09:41.512 [2024-04-26 15:19:57.993155] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:41.512 [2024-04-26 15:19:57.993239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.512 [2024-04-26 15:19:57.993360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.512 [2024-04-26 15:19:57.993518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.512 [2024-04-26 15:19:57.993519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.512 15:19:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:41.512 15:19:58 -- common/autotest_common.sh@850 -- # return 0 00:09:41.512 15:19:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:41.512 15:19:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:41.512 15:19:58 -- common/autotest_common.sh@10 -- # set +x 00:09:41.512 15:19:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.512 15:19:58 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:41.513 15:19:58 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17889 00:09:41.513 [2024-04-26 15:19:58.799744] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:41.513 15:19:58 -- target/invalid.sh@40 -- # out='request: 00:09:41.513 { 00:09:41.513 "nqn": "nqn.2016-06.io.spdk:cnode17889", 00:09:41.513 "tgt_name": "foobar", 00:09:41.513 "method": "nvmf_create_subsystem", 00:09:41.513 "req_id": 1 00:09:41.513 } 00:09:41.513 Got JSON-RPC error response 00:09:41.513 response: 00:09:41.513 { 00:09:41.513 "code": -32603, 00:09:41.513 "message": "Unable to find target foobar" 00:09:41.513 }' 00:09:41.513 15:19:58 -- target/invalid.sh@41 -- # [[ request: 
00:09:41.513 { 00:09:41.513 "nqn": "nqn.2016-06.io.spdk:cnode17889", 00:09:41.513 "tgt_name": "foobar", 00:09:41.513 "method": "nvmf_create_subsystem", 00:09:41.513 "req_id": 1 00:09:41.513 } 00:09:41.513 Got JSON-RPC error response 00:09:41.513 response: 00:09:41.513 { 00:09:41.513 "code": -32603, 00:09:41.513 "message": "Unable to find target foobar" 00:09:41.513 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:41.513 15:19:58 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:41.513 15:19:58 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode883 00:09:41.775 [2024-04-26 15:19:58.976413] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode883: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:41.775 15:19:59 -- target/invalid.sh@45 -- # out='request: 00:09:41.775 { 00:09:41.775 "nqn": "nqn.2016-06.io.spdk:cnode883", 00:09:41.775 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:41.775 "method": "nvmf_create_subsystem", 00:09:41.775 "req_id": 1 00:09:41.775 } 00:09:41.775 Got JSON-RPC error response 00:09:41.775 response: 00:09:41.775 { 00:09:41.775 "code": -32602, 00:09:41.775 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:41.775 }' 00:09:41.775 15:19:59 -- target/invalid.sh@46 -- # [[ request: 00:09:41.775 { 00:09:41.775 "nqn": "nqn.2016-06.io.spdk:cnode883", 00:09:41.775 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:41.775 "method": "nvmf_create_subsystem", 00:09:41.775 "req_id": 1 00:09:41.775 } 00:09:41.775 Got JSON-RPC error response 00:09:41.775 response: 00:09:41.775 { 00:09:41.775 "code": -32602, 00:09:41.775 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:41.775 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:41.775 15:19:59 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:41.775 15:19:59 -- target/invalid.sh@50 -- # 
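The invalid-serial cases above capture rpc.py's JSON-RPC error response into `$out` and assert on it with a bash glob match (the escaped `*\I\n\v\a\l\i\d\ \S\N*` pattern in the trace). A self-contained sketch of that check, using a canned response shaped like the one in the log in place of a live rpc.py call:

```shell
#!/usr/bin/env bash
# Pattern-match the captured error text; -32602 is the JSON-RPC
# "invalid params" code seen in the responses above.
out='{"code": -32602, "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"}'
if [[ $out == *"Invalid SN"* ]]; then
    echo "error matched"
fi
```

Matching on the message substring rather than parsing the JSON keeps the test independent of field ordering and whitespace in the response.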
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode956 00:09:41.775 [2024-04-26 15:19:59.153009] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode956: invalid model number 'SPDK_Controller' 00:09:41.775 15:19:59 -- target/invalid.sh@50 -- # out='request: 00:09:41.775 { 00:09:41.775 "nqn": "nqn.2016-06.io.spdk:cnode956", 00:09:41.775 "model_number": "SPDK_Controller\u001f", 00:09:41.775 "method": "nvmf_create_subsystem", 00:09:41.775 "req_id": 1 00:09:41.775 } 00:09:41.775 Got JSON-RPC error response 00:09:41.775 response: 00:09:41.775 { 00:09:41.775 "code": -32602, 00:09:41.775 "message": "Invalid MN SPDK_Controller\u001f" 00:09:41.775 }' 00:09:41.775 15:19:59 -- target/invalid.sh@51 -- # [[ request: 00:09:41.775 { 00:09:41.775 "nqn": "nqn.2016-06.io.spdk:cnode956", 00:09:41.775 "model_number": "SPDK_Controller\u001f", 00:09:41.775 "method": "nvmf_create_subsystem", 00:09:41.775 "req_id": 1 00:09:41.775 } 00:09:41.775 Got JSON-RPC error response 00:09:41.775 response: 00:09:41.775 { 00:09:41.775 "code": -32602, 00:09:41.775 "message": "Invalid MN SPDK_Controller\u001f" 00:09:41.775 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:41.775 15:19:59 -- target/invalid.sh@54 -- # gen_random_s 21 00:09:41.775 15:19:59 -- target/invalid.sh@19 -- # local length=21 ll 00:09:41.775 15:19:59 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:41.775 15:19:59 -- 
target/invalid.sh@21 -- # local chars 00:09:41.775 15:19:59 -- target/invalid.sh@22 -- # local string 00:09:41.775 15:19:59 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:41.775 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.775 15:19:59 -- target/invalid.sh@25 -- # printf %x 36 00:09:41.775 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:41.775 15:19:59 -- target/invalid.sh@25 -- # string+='$' 00:09:41.775 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.775 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.775 15:19:59 -- target/invalid.sh@25 -- # printf %x 65 00:09:41.775 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x41' 00:09:41.775 15:19:59 -- target/invalid.sh@25 -- # string+=A 00:09:41.775 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.775 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.775 15:19:59 -- target/invalid.sh@25 -- # printf %x 120 00:09:41.775 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:41.775 15:19:59 -- target/invalid.sh@25 -- # string+=x 00:09:41.775 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.775 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.775 15:19:59 -- target/invalid.sh@25 -- # printf %x 47 00:09:41.775 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:41.775 15:19:59 -- target/invalid.sh@25 -- # string+=/ 00:09:41.775 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.775 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.036 15:19:59 -- target/invalid.sh@25 -- # printf %x 86 00:09:42.036 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:42.036 15:19:59 -- target/invalid.sh@25 -- # string+=V 00:09:42.036 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.036 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.036 15:19:59 -- target/invalid.sh@25 -- # printf %x 116 00:09:42.036 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:42.036 
15:19:59 -- target/invalid.sh@25 -- # string+=t 00:09:42.036 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.036 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.036 15:19:59 -- target/invalid.sh@25 -- # printf %x 108 00:09:42.036 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:42.036 15:19:59 -- target/invalid.sh@25 -- # string+=l 00:09:42.036 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.036 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.036 15:19:59 -- target/invalid.sh@25 -- # printf %x 68 00:09:42.036 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:42.036 15:19:59 -- target/invalid.sh@25 -- # string+=D 00:09:42.036 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.036 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.036 15:19:59 -- target/invalid.sh@25 -- # printf %x 59 00:09:42.036 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:42.036 15:19:59 -- target/invalid.sh@25 -- # string+=';' 00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # printf %x 37 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # string+=% 00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # printf %x 59 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # string+=';' 00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # printf %x 44 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # string+=, 00:09:42.037 
15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # printf %x 46 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # string+=. 00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # printf %x 123 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # string+='{' 00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # printf %x 113 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # string+=q 00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # printf %x 124 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # string+='|' 00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # printf %x 87 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # string+=W 00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # printf %x 85 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # string+=U 00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # printf %x 80 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # string+=P 00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # printf %x 99 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # string+=c 00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # printf %x 105 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:42.037 15:19:59 -- target/invalid.sh@25 -- # string+=i 00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.037 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.037 15:19:59 -- target/invalid.sh@28 -- # [[ $ == \- ]] 00:09:42.037 15:19:59 -- target/invalid.sh@31 -- # echo '$Ax/VtlD;%;,.{q|WUPci' 00:09:42.037 15:19:59 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '$Ax/VtlD;%;,.{q|WUPci' nqn.2016-06.io.spdk:cnode27066 00:09:42.299 [2024-04-26 15:19:59.490025] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27066: invalid serial number '$Ax/VtlD;%;,.{q|WUPci' 00:09:42.299 15:19:59 -- target/invalid.sh@54 -- # out='request: 00:09:42.299 { 00:09:42.299 "nqn": "nqn.2016-06.io.spdk:cnode27066", 00:09:42.299 "serial_number": "$Ax/VtlD;%;,.{q|WUPci", 00:09:42.299 "method": "nvmf_create_subsystem", 00:09:42.299 "req_id": 1 00:09:42.299 } 00:09:42.299 Got JSON-RPC error response 00:09:42.299 response: 00:09:42.299 { 00:09:42.299 "code": -32602, 00:09:42.299 "message": "Invalid SN 
$Ax/VtlD;%;,.{q|WUPci" 00:09:42.299 }' 00:09:42.299 15:19:59 -- target/invalid.sh@55 -- # [[ request: 00:09:42.299 { 00:09:42.299 "nqn": "nqn.2016-06.io.spdk:cnode27066", 00:09:42.299 "serial_number": "$Ax/VtlD;%;,.{q|WUPci", 00:09:42.299 "method": "nvmf_create_subsystem", 00:09:42.299 "req_id": 1 00:09:42.299 } 00:09:42.299 Got JSON-RPC error response 00:09:42.299 response: 00:09:42.299 { 00:09:42.299 "code": -32602, 00:09:42.299 "message": "Invalid SN $Ax/VtlD;%;,.{q|WUPci" 00:09:42.299 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:42.299 15:19:59 -- target/invalid.sh@58 -- # gen_random_s 41 00:09:42.299 15:19:59 -- target/invalid.sh@19 -- # local length=41 ll 00:09:42.299 15:19:59 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:42.299 15:19:59 -- target/invalid.sh@21 -- # local chars 00:09:42.299 15:19:59 -- target/invalid.sh@22 -- # local string 00:09:42.299 15:19:59 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:42.299 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.299 15:19:59 -- target/invalid.sh@25 -- # printf %x 85 00:09:42.299 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:42.299 15:19:59 -- target/invalid.sh@25 -- # string+=U 00:09:42.299 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.299 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.299 15:19:59 -- target/invalid.sh@25 -- # printf %x 33 00:09:42.299 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:42.299 15:19:59 -- target/invalid.sh@25 -- # 
string+='!' 00:09:42.299 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.299 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.299 15:19:59 -- target/invalid.sh@25 -- # printf %x 90 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+=Z 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 124 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+='|' 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 122 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+=z 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 70 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x46' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+=F 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 64 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+=@ 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 83 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+=S 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # 
(( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 78 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+=N 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 111 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+=o 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 79 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+=O 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 124 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+='|' 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 43 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+=+ 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 111 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+=o 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # 
(( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 80 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+=P 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 100 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+=d 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 101 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+=e 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 69 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+=E 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 40 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+='(' 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 85 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+=U 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 
-- # printf %x 51 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+=3 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 79 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+=O 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 98 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+=b 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 102 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x66' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+=f 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 70 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x46' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+=F 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 71 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+=G 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 104 00:09:42.300 15:19:59 -- target/invalid.sh@25 
-- # echo -e '\x68' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+=h 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 71 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+=G 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 93 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # string+=']' 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.300 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.300 15:19:59 -- target/invalid.sh@25 -- # printf %x 109 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # string+=m 00:09:42.561 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.561 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # printf %x 95 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # string+=_ 00:09:42.561 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.561 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # printf %x 119 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # string+=w 00:09:42.561 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.561 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # printf %x 34 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:42.561 15:19:59 -- 
target/invalid.sh@25 -- # string+='"' 00:09:42.561 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.561 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # printf %x 83 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # string+=S 00:09:42.561 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.561 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # printf %x 124 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # string+='|' 00:09:42.561 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.561 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # printf %x 124 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # string+='|' 00:09:42.561 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.561 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # printf %x 109 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # string+=m 00:09:42.561 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.561 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # printf %x 80 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # string+=P 00:09:42.561 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.561 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # printf %x 68 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # string+=D 00:09:42.561 15:19:59 
-- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.561 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # printf %x 114 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # string+=r 00:09:42.561 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.561 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # printf %x 109 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:42.561 15:19:59 -- target/invalid.sh@25 -- # string+=m 00:09:42.561 15:19:59 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.561 15:19:59 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.561 15:19:59 -- target/invalid.sh@28 -- # [[ U == \- ]] 00:09:42.561 15:19:59 -- target/invalid.sh@31 -- # echo 'U!Z|zF@SNoO|+oPdeE(U3ObfFGhG]m_w"S||mPDrm' 00:09:42.562 15:19:59 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'U!Z|zF@SNoO|+oPdeE(U3ObfFGhG]m_w"S||mPDrm' nqn.2016-06.io.spdk:cnode11171 00:09:42.562 [2024-04-26 15:19:59.975595] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11171: invalid model number 'U!Z|zF@SNoO|+oPdeE(U3ObfFGhG]m_w"S||mPDrm' 00:09:42.562 15:20:00 -- target/invalid.sh@58 -- # out='request: 00:09:42.562 { 00:09:42.562 "nqn": "nqn.2016-06.io.spdk:cnode11171", 00:09:42.562 "model_number": "U!Z|zF@SNoO|+oPdeE(U3ObfFGhG]m_w\"S||mPDrm", 00:09:42.562 "method": "nvmf_create_subsystem", 00:09:42.562 "req_id": 1 00:09:42.562 } 00:09:42.562 Got JSON-RPC error response 00:09:42.562 response: 00:09:42.562 { 00:09:42.562 "code": -32602, 00:09:42.562 "message": "Invalid MN U!Z|zF@SNoO|+oPdeE(U3ObfFGhG]m_w\"S||mPDrm" 00:09:42.562 }' 00:09:42.562 15:20:00 -- target/invalid.sh@59 -- # [[ request: 00:09:42.562 { 00:09:42.562 "nqn": "nqn.2016-06.io.spdk:cnode11171", 00:09:42.562 "model_number": 
"U!Z|zF@SNoO|+oPdeE(U3ObfFGhG]m_w\"S||mPDrm", 00:09:42.562 "method": "nvmf_create_subsystem", 00:09:42.562 "req_id": 1 00:09:42.562 } 00:09:42.562 Got JSON-RPC error response 00:09:42.562 response: 00:09:42.562 { 00:09:42.562 "code": -32602, 00:09:42.562 "message": "Invalid MN U!Z|zF@SNoO|+oPdeE(U3ObfFGhG]m_w\"S||mPDrm" 00:09:42.562 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:42.562 15:20:00 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:42.823 [2024-04-26 15:20:00.148227] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:42.823 15:20:00 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:43.083 15:20:00 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:43.083 15:20:00 -- target/invalid.sh@67 -- # echo '' 00:09:43.083 15:20:00 -- target/invalid.sh@67 -- # head -n 1 00:09:43.083 15:20:00 -- target/invalid.sh@67 -- # IP= 00:09:43.083 15:20:00 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:43.083 [2024-04-26 15:20:00.485278] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:43.083 15:20:00 -- target/invalid.sh@69 -- # out='request: 00:09:43.083 { 00:09:43.083 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:43.083 "listen_address": { 00:09:43.083 "trtype": "tcp", 00:09:43.083 "traddr": "", 00:09:43.083 "trsvcid": "4421" 00:09:43.083 }, 00:09:43.083 "method": "nvmf_subsystem_remove_listener", 00:09:43.083 "req_id": 1 00:09:43.083 } 00:09:43.083 Got JSON-RPC error response 00:09:43.083 response: 00:09:43.083 { 00:09:43.083 "code": -32602, 00:09:43.083 "message": "Invalid parameters" 00:09:43.083 }' 00:09:43.083 15:20:00 -- target/invalid.sh@70 -- # [[ request: 00:09:43.083 { 00:09:43.083 "nqn": 
"nqn.2016-06.io.spdk:cnode", 00:09:43.083 "listen_address": { 00:09:43.083 "trtype": "tcp", 00:09:43.083 "traddr": "", 00:09:43.083 "trsvcid": "4421" 00:09:43.083 }, 00:09:43.083 "method": "nvmf_subsystem_remove_listener", 00:09:43.083 "req_id": 1 00:09:43.083 } 00:09:43.083 Got JSON-RPC error response 00:09:43.083 response: 00:09:43.083 { 00:09:43.083 "code": -32602, 00:09:43.083 "message": "Invalid parameters" 00:09:43.083 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:43.083 15:20:00 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29837 -i 0 00:09:43.344 [2024-04-26 15:20:00.657772] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29837: invalid cntlid range [0-65519] 00:09:43.344 15:20:00 -- target/invalid.sh@73 -- # out='request: 00:09:43.344 { 00:09:43.344 "nqn": "nqn.2016-06.io.spdk:cnode29837", 00:09:43.344 "min_cntlid": 0, 00:09:43.344 "method": "nvmf_create_subsystem", 00:09:43.344 "req_id": 1 00:09:43.344 } 00:09:43.344 Got JSON-RPC error response 00:09:43.344 response: 00:09:43.344 { 00:09:43.344 "code": -32602, 00:09:43.344 "message": "Invalid cntlid range [0-65519]" 00:09:43.344 }' 00:09:43.344 15:20:00 -- target/invalid.sh@74 -- # [[ request: 00:09:43.344 { 00:09:43.344 "nqn": "nqn.2016-06.io.spdk:cnode29837", 00:09:43.344 "min_cntlid": 0, 00:09:43.344 "method": "nvmf_create_subsystem", 00:09:43.344 "req_id": 1 00:09:43.344 } 00:09:43.344 Got JSON-RPC error response 00:09:43.344 response: 00:09:43.344 { 00:09:43.344 "code": -32602, 00:09:43.344 "message": "Invalid cntlid range [0-65519]" 00:09:43.344 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:43.344 15:20:00 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17424 -i 65520 00:09:43.604 [2024-04-26 15:20:00.834335] nvmf_rpc.c: 
439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17424: invalid cntlid range [65520-65519] 00:09:43.604 15:20:00 -- target/invalid.sh@75 -- # out='request: 00:09:43.604 { 00:09:43.604 "nqn": "nqn.2016-06.io.spdk:cnode17424", 00:09:43.604 "min_cntlid": 65520, 00:09:43.604 "method": "nvmf_create_subsystem", 00:09:43.604 "req_id": 1 00:09:43.604 } 00:09:43.604 Got JSON-RPC error response 00:09:43.604 response: 00:09:43.604 { 00:09:43.604 "code": -32602, 00:09:43.604 "message": "Invalid cntlid range [65520-65519]" 00:09:43.604 }' 00:09:43.604 15:20:00 -- target/invalid.sh@76 -- # [[ request: 00:09:43.604 { 00:09:43.604 "nqn": "nqn.2016-06.io.spdk:cnode17424", 00:09:43.604 "min_cntlid": 65520, 00:09:43.604 "method": "nvmf_create_subsystem", 00:09:43.604 "req_id": 1 00:09:43.604 } 00:09:43.604 Got JSON-RPC error response 00:09:43.604 response: 00:09:43.604 { 00:09:43.604 "code": -32602, 00:09:43.604 "message": "Invalid cntlid range [65520-65519]" 00:09:43.604 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:43.604 15:20:00 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6985 -I 0 00:09:43.604 [2024-04-26 15:20:01.010937] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6985: invalid cntlid range [1-0] 00:09:43.604 15:20:01 -- target/invalid.sh@77 -- # out='request: 00:09:43.604 { 00:09:43.604 "nqn": "nqn.2016-06.io.spdk:cnode6985", 00:09:43.604 "max_cntlid": 0, 00:09:43.604 "method": "nvmf_create_subsystem", 00:09:43.604 "req_id": 1 00:09:43.604 } 00:09:43.604 Got JSON-RPC error response 00:09:43.604 response: 00:09:43.604 { 00:09:43.604 "code": -32602, 00:09:43.604 "message": "Invalid cntlid range [1-0]" 00:09:43.604 }' 00:09:43.604 15:20:01 -- target/invalid.sh@78 -- # [[ request: 00:09:43.604 { 00:09:43.604 "nqn": "nqn.2016-06.io.spdk:cnode6985", 00:09:43.604 "max_cntlid": 0, 00:09:43.604 "method": 
"nvmf_create_subsystem", 00:09:43.604 "req_id": 1 00:09:43.604 } 00:09:43.605 Got JSON-RPC error response 00:09:43.605 response: 00:09:43.605 { 00:09:43.605 "code": -32602, 00:09:43.605 "message": "Invalid cntlid range [1-0]" 00:09:43.605 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:43.605 15:20:01 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23629 -I 65520 00:09:43.865 [2024-04-26 15:20:01.183445] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23629: invalid cntlid range [1-65520] 00:09:43.865 15:20:01 -- target/invalid.sh@79 -- # out='request: 00:09:43.865 { 00:09:43.865 "nqn": "nqn.2016-06.io.spdk:cnode23629", 00:09:43.865 "max_cntlid": 65520, 00:09:43.865 "method": "nvmf_create_subsystem", 00:09:43.865 "req_id": 1 00:09:43.865 } 00:09:43.865 Got JSON-RPC error response 00:09:43.865 response: 00:09:43.865 { 00:09:43.865 "code": -32602, 00:09:43.865 "message": "Invalid cntlid range [1-65520]" 00:09:43.865 }' 00:09:43.865 15:20:01 -- target/invalid.sh@80 -- # [[ request: 00:09:43.865 { 00:09:43.865 "nqn": "nqn.2016-06.io.spdk:cnode23629", 00:09:43.865 "max_cntlid": 65520, 00:09:43.865 "method": "nvmf_create_subsystem", 00:09:43.865 "req_id": 1 00:09:43.865 } 00:09:43.865 Got JSON-RPC error response 00:09:43.865 response: 00:09:43.865 { 00:09:43.865 "code": -32602, 00:09:43.865 "message": "Invalid cntlid range [1-65520]" 00:09:43.865 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:43.865 15:20:01 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20429 -i 6 -I 5 00:09:44.126 [2024-04-26 15:20:01.356001] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20429: invalid cntlid range [6-5] 00:09:44.126 15:20:01 -- target/invalid.sh@83 -- # out='request: 00:09:44.126 { 00:09:44.126 "nqn": 
"nqn.2016-06.io.spdk:cnode20429", 00:09:44.126 "min_cntlid": 6, 00:09:44.126 "max_cntlid": 5, 00:09:44.126 "method": "nvmf_create_subsystem", 00:09:44.126 "req_id": 1 00:09:44.126 } 00:09:44.126 Got JSON-RPC error response 00:09:44.126 response: 00:09:44.126 { 00:09:44.126 "code": -32602, 00:09:44.126 "message": "Invalid cntlid range [6-5]" 00:09:44.126 }' 00:09:44.126 15:20:01 -- target/invalid.sh@84 -- # [[ request: 00:09:44.126 { 00:09:44.126 "nqn": "nqn.2016-06.io.spdk:cnode20429", 00:09:44.126 "min_cntlid": 6, 00:09:44.126 "max_cntlid": 5, 00:09:44.126 "method": "nvmf_create_subsystem", 00:09:44.126 "req_id": 1 00:09:44.126 } 00:09:44.126 Got JSON-RPC error response 00:09:44.126 response: 00:09:44.126 { 00:09:44.126 "code": -32602, 00:09:44.126 "message": "Invalid cntlid range [6-5]" 00:09:44.126 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:44.126 15:20:01 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:44.126 15:20:01 -- target/invalid.sh@87 -- # out='request: 00:09:44.126 { 00:09:44.126 "name": "foobar", 00:09:44.126 "method": "nvmf_delete_target", 00:09:44.126 "req_id": 1 00:09:44.126 } 00:09:44.126 Got JSON-RPC error response 00:09:44.126 response: 00:09:44.126 { 00:09:44.126 "code": -32602, 00:09:44.126 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:44.126 }' 00:09:44.126 15:20:01 -- target/invalid.sh@88 -- # [[ request: 00:09:44.126 { 00:09:44.126 "name": "foobar", 00:09:44.126 "method": "nvmf_delete_target", 00:09:44.126 "req_id": 1 00:09:44.126 } 00:09:44.126 Got JSON-RPC error response 00:09:44.126 response: 00:09:44.126 { 00:09:44.126 "code": -32602, 00:09:44.126 "message": "The specified target doesn't exist, cannot delete it." 
00:09:44.126 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:44.126 15:20:01 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:44.126 15:20:01 -- target/invalid.sh@91 -- # nvmftestfini 00:09:44.126 15:20:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:44.126 15:20:01 -- nvmf/common.sh@117 -- # sync 00:09:44.126 15:20:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:44.126 15:20:01 -- nvmf/common.sh@120 -- # set +e 00:09:44.126 15:20:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:44.126 15:20:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:44.126 rmmod nvme_tcp 00:09:44.126 rmmod nvme_fabrics 00:09:44.126 rmmod nvme_keyring 00:09:44.126 15:20:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:44.126 15:20:01 -- nvmf/common.sh@124 -- # set -e 00:09:44.126 15:20:01 -- nvmf/common.sh@125 -- # return 0 00:09:44.126 15:20:01 -- nvmf/common.sh@478 -- # '[' -n 1495967 ']' 00:09:44.126 15:20:01 -- nvmf/common.sh@479 -- # killprocess 1495967 00:09:44.126 15:20:01 -- common/autotest_common.sh@936 -- # '[' -z 1495967 ']' 00:09:44.126 15:20:01 -- common/autotest_common.sh@940 -- # kill -0 1495967 00:09:44.126 15:20:01 -- common/autotest_common.sh@941 -- # uname 00:09:44.126 15:20:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:44.126 15:20:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1495967 00:09:44.386 15:20:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:44.387 15:20:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:44.387 15:20:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1495967' 00:09:44.387 killing process with pid 1495967 00:09:44.387 15:20:01 -- common/autotest_common.sh@955 -- # kill 1495967 00:09:44.387 15:20:01 -- common/autotest_common.sh@960 -- # wait 1495967 00:09:44.387 15:20:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 
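Every negative test traced above (invalid model number, bad cntlid ranges, nonexistent target) follows the same shape: invoke the RPC with deliberately bad arguments, capture the JSON-RPC error output, and glob-match the expected message. A minimal sketch of that pattern, where `fake_rpc` is a hypothetical stand-in for `scripts/rpc.py nvmf_create_subsystem`, not the real tool:

```shell
# Sketch of the invalid.sh negative-test pattern: run an RPC with bad
# arguments, capture the JSON-RPC error, glob-match the message.
# fake_rpc is a mock standing in for scripts/rpc.py.
fake_rpc() {
    local min=$1 max=$2
    # cntlid must lie in [1, 65519] and min must not exceed max
    if (( min < 1 || max > 65519 || min > max )); then
        printf '{ "code": -32602, "message": "Invalid cntlid range [%s-%s]" }\n' "$min" "$max"
        return 1
    fi
    echo '{}'
}

out=$(fake_rpc 0 65519) || true   # min_cntlid 0 is out of range, as with cnode29837 above
[[ $out == *"Invalid cntlid range"* ]] && echo "negative test passed"
```

The `|| true` mirrors how the test script tolerates the expected non-zero exit status so the `[[ ... == *pattern* ]]` check can inspect the captured error text.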
00:09:44.387 15:20:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:09:44.387 15:20:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:09:44.387 15:20:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:44.387 15:20:01 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:44.387 15:20:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:44.387 15:20:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:44.387 15:20:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:46.932 15:20:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:46.932
00:09:46.932 real 0m13.378s
00:09:46.932 user 0m19.210s
00:09:46.932 sys 0m6.260s
00:09:46.932 15:20:03 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:09:46.932 15:20:03 -- common/autotest_common.sh@10 -- # set +x
00:09:46.932 ************************************
00:09:46.932 END TEST nvmf_invalid
00:09:46.932 ************************************
00:09:46.932 15:20:03 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp
00:09:46.932 15:20:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:09:46.932 15:20:03 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:46.932 15:20:03 -- common/autotest_common.sh@10 -- # set +x
00:09:46.932 ************************************
00:09:46.932 START TEST nvmf_abort
00:09:46.932 ************************************
00:09:46.932 15:20:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp
00:09:46.932 * Looking for test storage...
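Earlier in this trace, invalid.sh assembles the bogus 41-character model number one character at a time with `printf %x` and `echo -e` before handing it to `nvmf_create_subsystem`. A condensed sketch of that loop (`gen_random_string` is a hypothetical compression of the traced steps, not the script's literal code):

```shell
# Condensed sketch of the character-append loop seen in the trace:
# pick a printable ASCII code, render it with printf's \xHH escape,
# and append it to the accumulating string.
gen_random_string() {
    local length=$1 string='' code
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 94 + 33 ))                      # printable ASCII 33..126
        string+=$(printf "\\x$(printf %x "$code")")       # e.g. 0x5a -> 'Z'
    done
    printf '%s\n' "$string"
}

gen_random_string 41    # e.g. 'U!Z|zF@SNoO|+oPdeE(U3ObfFGhG]m_w"S||mPDrm'
```

The generated string deliberately includes shell-hostile characters such as `|`, `"`, and `(`, which is why the trace quotes each appended character before passing the result to rpc.py.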
00:09:46.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:46.932 15:20:04 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:46.932 15:20:04 -- nvmf/common.sh@7 -- # uname -s 00:09:46.932 15:20:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.932 15:20:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.932 15:20:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.932 15:20:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.932 15:20:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.932 15:20:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.932 15:20:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.932 15:20:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.932 15:20:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.932 15:20:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.932 15:20:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:46.932 15:20:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:46.932 15:20:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.932 15:20:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.932 15:20:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:46.932 15:20:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.932 15:20:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:46.932 15:20:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.932 15:20:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.932 15:20:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.932 15:20:04 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.932 15:20:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.932 15:20:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.932 15:20:04 -- paths/export.sh@5 -- # export PATH 00:09:46.932 15:20:04 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.932 15:20:04 -- nvmf/common.sh@47 -- # : 0 00:09:46.932 15:20:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:46.932 15:20:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:46.932 15:20:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.932 15:20:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.932 15:20:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.932 15:20:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:46.932 15:20:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:46.932 15:20:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:46.932 15:20:04 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:46.932 15:20:04 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:46.932 15:20:04 -- target/abort.sh@14 -- # nvmftestinit 00:09:46.932 15:20:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:46.932 15:20:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.932 15:20:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:46.932 15:20:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:46.932 15:20:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:46.932 15:20:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.932 15:20:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:46.932 15:20:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.932 15:20:04 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:46.933 15:20:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:46.933 15:20:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:46.933 15:20:04 -- common/autotest_common.sh@10 -- # set +x 00:09:53.643 15:20:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:53.643 15:20:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:53.643 15:20:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:53.643 15:20:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:53.643 15:20:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:53.643 15:20:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:53.643 15:20:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:53.643 15:20:11 -- nvmf/common.sh@295 -- # net_devs=() 00:09:53.643 15:20:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:53.643 15:20:11 -- nvmf/common.sh@296 -- # e810=() 00:09:53.643 15:20:11 -- nvmf/common.sh@296 -- # local -ga e810 00:09:53.643 15:20:11 -- nvmf/common.sh@297 -- # x722=() 00:09:53.643 15:20:11 -- nvmf/common.sh@297 -- # local -ga x722 00:09:53.643 15:20:11 -- nvmf/common.sh@298 -- # mlx=() 00:09:53.643 15:20:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:53.643 15:20:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:53.643 15:20:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:53.643 15:20:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:53.643 15:20:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:53.643 15:20:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:53.643 15:20:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:53.643 15:20:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:53.643 15:20:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:53.643 15:20:11 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:53.643 15:20:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:53.643 15:20:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:53.643 15:20:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:53.643 15:20:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:53.643 15:20:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:53.643 15:20:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:53.643 15:20:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:53.643 15:20:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:53.643 15:20:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:53.643 15:20:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:53.643 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:53.643 15:20:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:53.643 15:20:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:53.643 15:20:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.643 15:20:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.643 15:20:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:53.643 15:20:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:53.643 15:20:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:53.643 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:53.643 15:20:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:53.643 15:20:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:53.643 15:20:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.643 15:20:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.643 15:20:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:53.643 15:20:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:53.643 15:20:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:53.643 15:20:11 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:53.643 15:20:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:53.643 15:20:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.643 15:20:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:53.643 15:20:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.643 15:20:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:53.643 Found net devices under 0000:31:00.0: cvl_0_0 00:09:53.643 15:20:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.643 15:20:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:53.643 15:20:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.904 15:20:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:53.904 15:20:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.904 15:20:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:53.904 Found net devices under 0000:31:00.1: cvl_0_1 00:09:53.904 15:20:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.904 15:20:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:53.904 15:20:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:53.904 15:20:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:53.904 15:20:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:53.904 15:20:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:53.904 15:20:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.904 15:20:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:53.905 15:20:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:53.905 15:20:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:53.905 15:20:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:53.905 15:20:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:53.905 15:20:11 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:53.905 15:20:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:53.905 15:20:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.905 15:20:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:53.905 15:20:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:53.905 15:20:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:53.905 15:20:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:53.905 15:20:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:53.905 15:20:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:53.905 15:20:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:53.905 15:20:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:54.166 15:20:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:54.166 15:20:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:54.166 15:20:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:54.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:54.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.735 ms 00:09:54.166 00:09:54.166 --- 10.0.0.2 ping statistics --- 00:09:54.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.166 rtt min/avg/max/mdev = 0.735/0.735/0.735/0.000 ms 00:09:54.166 15:20:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:54.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:54.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:09:54.166 00:09:54.166 --- 10.0.0.1 ping statistics --- 00:09:54.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.166 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:09:54.166 15:20:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.166 15:20:11 -- nvmf/common.sh@411 -- # return 0 00:09:54.166 15:20:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:54.166 15:20:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.166 15:20:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:54.166 15:20:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:54.166 15:20:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.166 15:20:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:54.166 15:20:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:54.166 15:20:11 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:54.166 15:20:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:54.166 15:20:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:54.166 15:20:11 -- common/autotest_common.sh@10 -- # set +x 00:09:54.166 15:20:11 -- nvmf/common.sh@470 -- # nvmfpid=1501215 00:09:54.166 15:20:11 -- nvmf/common.sh@471 -- # waitforlisten 1501215 00:09:54.166 15:20:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:54.166 15:20:11 -- common/autotest_common.sh@817 -- # '[' -z 1501215 ']' 00:09:54.166 15:20:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.166 15:20:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:54.166 15:20:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:54.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.166 15:20:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:54.166 15:20:11 -- common/autotest_common.sh@10 -- # set +x 00:09:54.166 [2024-04-26 15:20:11.516607] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:09:54.166 [2024-04-26 15:20:11.516672] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.166 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.166 [2024-04-26 15:20:11.604147] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:54.427 [2024-04-26 15:20:11.695405] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.427 [2024-04-26 15:20:11.695459] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:54.427 [2024-04-26 15:20:11.695468] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.427 [2024-04-26 15:20:11.695475] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.427 [2024-04-26 15:20:11.695481] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:54.427 [2024-04-26 15:20:11.695615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.427 [2024-04-26 15:20:11.695779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.427 [2024-04-26 15:20:11.695780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:54.998 15:20:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:54.998 15:20:12 -- common/autotest_common.sh@850 -- # return 0 00:09:54.998 15:20:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:54.998 15:20:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:54.998 15:20:12 -- common/autotest_common.sh@10 -- # set +x 00:09:54.998 15:20:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.998 15:20:12 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:54.998 15:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.998 15:20:12 -- common/autotest_common.sh@10 -- # set +x 00:09:54.998 [2024-04-26 15:20:12.345635] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.998 15:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.998 15:20:12 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:54.998 15:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.998 15:20:12 -- common/autotest_common.sh@10 -- # set +x 00:09:54.998 Malloc0 00:09:54.998 15:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.998 15:20:12 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:54.998 15:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.998 15:20:12 -- common/autotest_common.sh@10 -- # set +x 00:09:54.998 Delay0 00:09:54.998 15:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.998 15:20:12 -- target/abort.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:54.998 15:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.998 15:20:12 -- common/autotest_common.sh@10 -- # set +x 00:09:54.998 15:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.998 15:20:12 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:54.998 15:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.998 15:20:12 -- common/autotest_common.sh@10 -- # set +x 00:09:54.998 15:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.998 15:20:12 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:54.998 15:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.998 15:20:12 -- common/autotest_common.sh@10 -- # set +x 00:09:54.998 [2024-04-26 15:20:12.431983] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:54.998 15:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.998 15:20:12 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:54.998 15:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.998 15:20:12 -- common/autotest_common.sh@10 -- # set +x 00:09:55.259 15:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.259 15:20:12 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:55.259 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.259 [2024-04-26 15:20:12.511894] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:57.800 Initializing NVMe Controllers 00:09:57.800 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode0 00:09:57.800 controller IO queue size 128 less than required 00:09:57.800 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:57.800 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:57.800 Initialization complete. Launching workers. 00:09:57.800 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 124, failed: 33574 00:09:57.800 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33636, failed to submit 62 00:09:57.800 success 33578, unsuccess 58, failed 0 00:09:57.800 15:20:14 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:57.800 15:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:57.800 15:20:14 -- common/autotest_common.sh@10 -- # set +x 00:09:57.800 15:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:57.800 15:20:14 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:57.800 15:20:14 -- target/abort.sh@38 -- # nvmftestfini 00:09:57.800 15:20:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:57.800 15:20:14 -- nvmf/common.sh@117 -- # sync 00:09:57.800 15:20:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:57.800 15:20:14 -- nvmf/common.sh@120 -- # set +e 00:09:57.800 15:20:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:57.800 15:20:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:57.800 rmmod nvme_tcp 00:09:57.800 rmmod nvme_fabrics 00:09:57.800 rmmod nvme_keyring 00:09:57.800 15:20:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:57.800 15:20:14 -- nvmf/common.sh@124 -- # set -e 00:09:57.800 15:20:14 -- nvmf/common.sh@125 -- # return 0 00:09:57.800 15:20:14 -- nvmf/common.sh@478 -- # '[' -n 1501215 ']' 00:09:57.800 15:20:14 -- nvmf/common.sh@479 -- # killprocess 1501215 00:09:57.800 15:20:14 -- common/autotest_common.sh@936 -- # '[' -z 1501215 ']' 00:09:57.800 15:20:14 
-- common/autotest_common.sh@940 -- # kill -0 1501215 00:09:57.800 15:20:14 -- common/autotest_common.sh@941 -- # uname 00:09:57.800 15:20:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:57.800 15:20:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1501215 00:09:57.800 15:20:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:57.800 15:20:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:57.800 15:20:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1501215' 00:09:57.800 killing process with pid 1501215 00:09:57.800 15:20:14 -- common/autotest_common.sh@955 -- # kill 1501215 00:09:57.800 15:20:14 -- common/autotest_common.sh@960 -- # wait 1501215 00:09:57.800 15:20:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:57.800 15:20:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:57.800 15:20:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:57.801 15:20:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:57.801 15:20:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:57.801 15:20:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.801 15:20:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:57.801 15:20:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.715 15:20:17 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:59.715 00:09:59.715 real 0m13.020s 00:09:59.715 user 0m13.702s 00:09:59.715 sys 0m6.248s 00:09:59.715 15:20:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:59.715 15:20:17 -- common/autotest_common.sh@10 -- # set +x 00:09:59.715 ************************************ 00:09:59.715 END TEST nvmf_abort 00:09:59.715 ************************************ 00:09:59.715 15:20:17 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 
00:09:59.715 15:20:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:59.715 15:20:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:59.715 15:20:17 -- common/autotest_common.sh@10 -- # set +x 00:09:59.977 ************************************ 00:09:59.977 START TEST nvmf_ns_hotplug_stress 00:09:59.977 ************************************ 00:09:59.977 15:20:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:59.977 * Looking for test storage... 00:09:59.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:59.977 15:20:17 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:59.977 15:20:17 -- nvmf/common.sh@7 -- # uname -s 00:09:59.977 15:20:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.977 15:20:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.977 15:20:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.977 15:20:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.977 15:20:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.977 15:20:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.977 15:20:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.977 15:20:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.977 15:20:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.977 15:20:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.977 15:20:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:59.977 15:20:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:59.977 15:20:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.977 15:20:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:09:59.977 15:20:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:59.977 15:20:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.977 15:20:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:59.977 15:20:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.977 15:20:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.977 15:20:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.977 15:20:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.977 15:20:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.977 15:20:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.977 15:20:17 -- paths/export.sh@5 -- # export PATH 00:09:59.977 15:20:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.977 15:20:17 -- nvmf/common.sh@47 -- # : 0 00:09:59.977 15:20:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:59.977 15:20:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:59.977 15:20:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.977 15:20:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.977 15:20:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.977 15:20:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:59.977 15:20:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:59.977 15:20:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:59.977 15:20:17 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:59.977 15:20:17 -- target/ns_hotplug_stress.sh@13 -- # 
nvmftestinit 00:09:59.977 15:20:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:59.977 15:20:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.977 15:20:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:59.977 15:20:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:59.977 15:20:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:59.978 15:20:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.978 15:20:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:59.978 15:20:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.978 15:20:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:59.978 15:20:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:59.978 15:20:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:59.978 15:20:17 -- common/autotest_common.sh@10 -- # set +x 00:10:08.129 15:20:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:08.129 15:20:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:08.129 15:20:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:08.129 15:20:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:08.129 15:20:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:08.129 15:20:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:08.129 15:20:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:08.129 15:20:24 -- nvmf/common.sh@295 -- # net_devs=() 00:10:08.129 15:20:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:08.129 15:20:24 -- nvmf/common.sh@296 -- # e810=() 00:10:08.129 15:20:24 -- nvmf/common.sh@296 -- # local -ga e810 00:10:08.129 15:20:24 -- nvmf/common.sh@297 -- # x722=() 00:10:08.129 15:20:24 -- nvmf/common.sh@297 -- # local -ga x722 00:10:08.129 15:20:24 -- nvmf/common.sh@298 -- # mlx=() 00:10:08.129 15:20:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:08.129 15:20:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.129 15:20:24 -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:08.129 15:20:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.129 15:20:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.129 15:20:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.129 15:20:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.129 15:20:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.129 15:20:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.129 15:20:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.129 15:20:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.129 15:20:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.129 15:20:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:08.129 15:20:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:08.129 15:20:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:08.129 15:20:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:08.129 15:20:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:08.129 15:20:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:08.129 15:20:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:08.129 15:20:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:08.129 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:08.129 15:20:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:08.129 15:20:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:08.129 15:20:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.129 15:20:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.129 15:20:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:08.129 15:20:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:08.129 15:20:24 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:08.129 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:08.129 15:20:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:08.129 15:20:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:08.129 15:20:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.129 15:20:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.129 15:20:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:08.129 15:20:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:08.129 15:20:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:08.129 15:20:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:08.129 15:20:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:08.129 15:20:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.129 15:20:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:08.129 15:20:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.130 15:20:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:08.130 Found net devices under 0000:31:00.0: cvl_0_0 00:10:08.130 15:20:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.130 15:20:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:08.130 15:20:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.130 15:20:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:08.130 15:20:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.130 15:20:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:08.130 Found net devices under 0000:31:00.1: cvl_0_1 00:10:08.130 15:20:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.130 15:20:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:08.130 15:20:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:08.130 15:20:24 -- nvmf/common.sh@405 -- # [[ yes == yes 
]] 00:10:08.130 15:20:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:08.130 15:20:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:08.130 15:20:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.130 15:20:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.130 15:20:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:08.130 15:20:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:08.130 15:20:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:08.130 15:20:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:08.130 15:20:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:08.130 15:20:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:08.130 15:20:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.130 15:20:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:08.130 15:20:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:08.130 15:20:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:08.130 15:20:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:08.130 15:20:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:08.130 15:20:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:08.130 15:20:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:08.130 15:20:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:08.130 15:20:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:08.130 15:20:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:08.130 15:20:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:08.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:08.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:10:08.130 00:10:08.130 --- 10.0.0.2 ping statistics --- 00:10:08.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.130 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:10:08.130 15:20:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:08.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:08.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:10:08.130 00:10:08.130 --- 10.0.0.1 ping statistics --- 00:10:08.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.130 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:10:08.130 15:20:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.130 15:20:24 -- nvmf/common.sh@411 -- # return 0 00:10:08.130 15:20:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:08.130 15:20:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.130 15:20:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:08.130 15:20:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:08.130 15:20:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.130 15:20:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:08.130 15:20:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:08.130 15:20:24 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:10:08.130 15:20:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:08.130 15:20:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:08.130 15:20:24 -- common/autotest_common.sh@10 -- # set +x 00:10:08.130 15:20:24 -- nvmf/common.sh@470 -- # nvmfpid=1506298 00:10:08.130 15:20:24 -- nvmf/common.sh@471 -- # waitforlisten 1506298 00:10:08.130 15:20:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:08.130 15:20:24 -- 
common/autotest_common.sh@817 -- # '[' -z 1506298 ']' 00:10:08.130 15:20:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.130 15:20:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:08.130 15:20:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.130 15:20:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:08.130 15:20:24 -- common/autotest_common.sh@10 -- # set +x 00:10:08.130 [2024-04-26 15:20:24.796807] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:10:08.130 [2024-04-26 15:20:24.796868] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.130 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.130 [2024-04-26 15:20:24.858041] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:08.130 [2024-04-26 15:20:24.914193] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.130 [2024-04-26 15:20:24.914229] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:08.130 [2024-04-26 15:20:24.914235] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.130 [2024-04-26 15:20:24.914239] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:08.130 [2024-04-26 15:20:24.914243] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:08.130 [2024-04-26 15:20:24.914350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.130 [2024-04-26 15:20:24.914504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.130 [2024-04-26 15:20:24.914505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:08.392 15:20:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:08.392 15:20:25 -- common/autotest_common.sh@850 -- # return 0 00:10:08.392 15:20:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:08.392 15:20:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:08.392 15:20:25 -- common/autotest_common.sh@10 -- # set +x 00:10:08.392 15:20:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.392 15:20:25 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:10:08.392 15:20:25 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:08.392 [2024-04-26 15:20:25.782000] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.392 15:20:25 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:08.653 15:20:25 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.915 [2024-04-26 15:20:26.123130] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.915 15:20:26 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:08.915 15:20:26 -- target/ns_hotplug_stress.sh@23 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:09.176 Malloc0 00:10:09.176 15:20:26 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:09.437 Delay0 00:10:09.437 15:20:26 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.437 15:20:26 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:09.699 NULL1 00:10:09.699 15:20:27 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:09.960 15:20:27 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=1506669 00:10:09.960 15:20:27 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:09.960 15:20:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:09.960 15:20:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.960 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.905 Read completed with error (sct=0, sc=11) 00:10:10.905 15:20:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.905 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:10:11.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.166 15:20:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:10:11.166 15:20:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:11.166 true 00:10:11.426 15:20:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:11.426 15:20:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.370 15:20:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.370 15:20:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:10:12.370 15:20:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:12.370 true 00:10:12.370 15:20:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:12.370 15:20:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.631 15:20:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.892 15:20:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:10:12.892 15:20:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:12.892 true 00:10:12.892 15:20:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:12.892 15:20:30 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.153 15:20:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.413 15:20:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:10:13.413 15:20:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:13.413 true 00:10:13.413 15:20:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:13.413 15:20:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.673 15:20:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.934 15:20:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:10:13.934 15:20:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:13.934 true 00:10:13.934 15:20:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:13.934 15:20:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.194 15:20:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.455 15:20:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:10:14.455 15:20:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:14.455 true 00:10:14.455 15:20:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 
00:10:14.455 15:20:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.715 15:20:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.715 15:20:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:10:14.715 15:20:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:14.975 true 00:10:14.975 15:20:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:14.975 15:20:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.236 15:20:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.236 15:20:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:10:15.236 15:20:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:15.496 true 00:10:15.496 15:20:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:15.496 15:20:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.757 15:20:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.757 15:20:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:10:15.757 15:20:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:16.032 true 00:10:16.032 
15:20:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:16.032 15:20:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.032 15:20:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.295 15:20:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:10:16.295 15:20:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:16.556 true 00:10:16.556 15:20:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:16.556 15:20:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.556 15:20:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.816 15:20:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:10:16.816 15:20:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:17.077 true 00:10:17.077 15:20:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:17.077 15:20:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.077 15:20:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.337 15:20:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:10:17.337 15:20:34 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:17.337 true 00:10:17.596 15:20:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:17.596 15:20:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.537 15:20:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.537 15:20:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:10:18.537 15:20:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:18.537 true 00:10:18.537 15:20:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:18.537 15:20:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.798 15:20:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.058 15:20:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:10:19.058 15:20:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:19.058 true 00:10:19.058 15:20:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:19.058 15:20:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.318 15:20:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.579 15:20:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:10:19.579 15:20:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:19.579 true 00:10:19.579 15:20:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:19.579 15:20:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.839 15:20:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.839 15:20:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:10:19.839 15:20:37 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:20.100 true 00:10:20.100 15:20:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:20.100 15:20:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.361 15:20:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.361 15:20:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:10:20.361 15:20:37 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:20.621 true 00:10:20.621 15:20:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:20.621 15:20:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.882 15:20:38 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.882 15:20:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:10:20.882 15:20:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:21.143 true 00:10:21.143 15:20:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:21.143 15:20:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.403 15:20:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.403 15:20:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:10:21.403 15:20:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:21.662 true 00:10:21.662 15:20:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:21.662 15:20:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.922 15:20:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.922 15:20:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:10:21.922 15:20:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:22.183 true 00:10:22.183 15:20:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:22.183 15:20:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:10:22.443 15:20:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.443 15:20:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:10:22.443 15:20:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:22.704 true 00:10:22.704 15:20:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:22.704 15:20:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.704 15:20:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.965 15:20:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:10:22.965 15:20:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:23.225 true 00:10:23.225 15:20:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:23.225 15:20:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.225 15:20:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.485 15:20:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:10:23.485 15:20:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:23.744 true 00:10:23.744 15:20:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:23.745 15:20:40 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.745 15:20:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.005 15:20:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:10:24.005 15:20:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:24.265 true 00:10:24.265 15:20:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:24.265 15:20:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.265 15:20:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.525 15:20:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:10:24.525 15:20:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:24.526 true 00:10:24.785 15:20:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:24.785 15:20:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.777 15:20:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.777 15:20:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:10:25.777 15:20:43 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:26.039 true 00:10:26.039 15:20:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:26.039 15:20:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.039 15:20:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.299 15:20:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:10:26.299 15:20:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:26.559 true 00:10:26.559 15:20:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:26.559 15:20:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.559 15:20:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.819 15:20:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:10:26.819 15:20:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:27.080 true 00:10:27.080 15:20:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:27.080 15:20:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.080 15:20:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.341 15:20:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 
00:10:27.341 15:20:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:27.603 true 00:10:27.603 15:20:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:27.603 15:20:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.603 15:20:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.864 15:20:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:10:27.864 15:20:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:27.864 true 00:10:27.864 15:20:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:27.864 15:20:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.125 15:20:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.385 15:20:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1031 00:10:28.385 15:20:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:28.385 true 00:10:28.385 15:20:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:28.385 15:20:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.646 15:20:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.906 
15:20:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1032 00:10:28.906 15:20:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:28.906 true 00:10:28.906 15:20:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:28.906 15:20:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.166 15:20:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.425 15:20:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1033 00:10:29.425 15:20:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:29.425 true 00:10:29.425 15:20:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:29.425 15:20:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.685 15:20:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.946 15:20:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1034 00:10:29.946 15:20:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:29.946 true 00:10:29.946 15:20:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:29.946 15:20:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.205 15:20:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.465 15:20:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1035 00:10:30.465 15:20:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:30.465 true 00:10:30.465 15:20:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:30.465 15:20:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.726 15:20:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.726 15:20:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1036 00:10:30.726 15:20:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:31.007 true 00:10:31.007 15:20:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:31.007 15:20:48 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.278 15:20:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.278 15:20:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1037 00:10:31.278 15:20:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:31.588 true 00:10:31.588 15:20:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:31.588 15:20:48 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.588 15:20:49 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.848 15:20:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1038 00:10:31.848 15:20:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:32.109 true 00:10:32.109 15:20:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:32.109 15:20:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.109 15:20:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.370 15:20:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1039 00:10:32.370 15:20:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:32.631 true 00:10:32.631 15:20:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:32.631 15:20:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.631 15:20:50 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.892 15:20:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1040 00:10:32.892 15:20:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:32.892 true 00:10:33.153 15:20:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:33.154 15:20:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:10:33.154 15:20:50 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.414 15:20:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1041 00:10:33.414 15:20:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:33.414 true 00:10:33.414 15:20:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:33.414 15:20:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.675 15:20:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.936 15:20:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1042 00:10:33.936 15:20:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:33.936 true 00:10:33.936 15:20:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:33.936 15:20:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.197 15:20:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.459 15:20:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1043 00:10:34.459 15:20:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:34.459 true 00:10:34.459 15:20:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:34.459 15:20:51 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.719 15:20:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.980 15:20:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1044 00:10:34.980 15:20:52 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:34.980 true 00:10:34.980 15:20:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:34.980 15:20:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.241 15:20:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.502 15:20:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1045 00:10:35.502 15:20:52 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:35.502 true 00:10:35.502 15:20:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:35.502 15:20:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.764 15:20:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.024 15:20:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1046 00:10:36.024 15:20:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:36.024 true 00:10:36.024 15:20:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 
00:10:36.024 15:20:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.285 15:20:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.285 15:20:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1047 00:10:36.285 15:20:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:36.546 true 00:10:36.546 15:20:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:36.546 15:20:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.806 15:20:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.806 15:20:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1048 00:10:36.806 15:20:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:37.067 true 00:10:37.067 15:20:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:37.067 15:20:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.011 15:20:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.011 15:20:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1049 00:10:38.011 15:20:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:38.272 true 00:10:38.272 
15:20:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:38.272 15:20:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.532 15:20:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.532 15:20:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1050 00:10:38.532 15:20:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:38.793 true 00:10:38.793 15:20:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:38.794 15:20:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.794 15:20:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.070 [2024-04-26 15:20:56.384705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.070 [identical "Message suppressed" and ctrlr_bdev.c read-error lines repeated with later timestamps omitted] [2024-04-26
15:20:56.392939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.392979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 
[2024-04-26 15:20:56.393848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.393970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.394002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.394368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.394428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.394456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.394490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.394519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.394566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.394595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.394641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.394672] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.394700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.394729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.394756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.394786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.394810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.394847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.394883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.394921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.394960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.394995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.395028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.395058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.395095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.395125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.073 [2024-04-26 15:20:56.395151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.395181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.395210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.395237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.395267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.395294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.395325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.395361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.395395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.395424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.395452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.395485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.073 [2024-04-26 15:20:56.395511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.395541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.395572] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.395606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.395641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.395668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.395701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.395731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.395760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.395792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.395824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.395858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.395893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.395923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.395954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.395986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.396019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.396053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.396086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.396120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.396157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.396188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.396222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.396252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.396281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.396309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.396336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.396365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.396396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.396737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.396767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 
15:20:56.396797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.396832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.396865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.396892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.396922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.396953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.396985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 
[2024-04-26 15:20:56.397696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.397990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.398016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.398048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.398077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.398108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.398141] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.398174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.398205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.398236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.398269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.398303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.398331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.398364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.398391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.398419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.398449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.398476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.398508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.398533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.398568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.074 [2024-04-26 15:20:56.398598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.398625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.398650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.399032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.399063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.074 [2024-04-26 15:20:56.399094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.075 [2024-04-26 15:20:56.399122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.075 [2024-04-26 15:20:56.399150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.075 [2024-04-26 15:20:56.399180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.075 [2024-04-26 15:20:56.399213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.075 [2024-04-26 15:20:56.399244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.075 [2024-04-26 15:20:56.399273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.075 [2024-04-26 15:20:56.399308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.075 [2024-04-26 15:20:56.399333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.075 [2024-04-26 15:20:56.399363] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.075 [2024-04-26 15:20:56.399393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.075 [2024-04-26 15:20:56.399423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.075 [2024-04-26 15:20:56.399455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.075 [2024-04-26 15:20:56.399489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.075 [2024-04-26 15:20:56.399523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.075 [2024-04-26 15:20:56.399553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.075 [2024-04-26 15:20:56.399580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.075 [2024-04-26 15:20:56.399609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.075 [2024-04-26 15:20:56.399640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.075 [2024-04-26 15:20:56.399671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.075 [2024-04-26 15:20:56.399702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.075 [2024-04-26 15:20:56.399736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.075 [2024-04-26 15:20:56.399762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.075 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:39.075 [2024-04-26 
15:20:56.399792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.075
15:20:56.411017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 
[2024-04-26 15:20:56.411926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.411983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.412014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.412045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.412078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.412112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.412142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.412171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.412202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.412229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.412264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.412293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.412325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.412354] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.412388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.412419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.412454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.412485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.078 [2024-04-26 15:20:56.412517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.412548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.412577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.412605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.412636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.412997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413570] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.413972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 
15:20:56.414461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.414921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.415282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.415343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.415370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.415429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.415458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.415490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.415520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.415566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.415598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.415652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.415682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.415714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.415746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 
[2024-04-26 15:20:56.415776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.415803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.415831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.415863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.415895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.415925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.415956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.415984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.416010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.416041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.416072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.416096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.416130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.416161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.079 [2024-04-26 15:20:56.416196] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.416977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.417024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.417054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.417092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.417125] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.417162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.417193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.417228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.417258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.417296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.417329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 15:20:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1051 00:10:39.080 [2024-04-26 15:20:56.417684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.417721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.417750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.417787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.417819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 15:20:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:39.080 [2024-04-26 15:20:56.417855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 
[2024-04-26 15:20:56.417889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.417922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.417952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.417983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.418014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.418044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.418084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.418116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.418148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.418183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.418219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.418253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.418288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.418319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.080 [2024-04-26 15:20:56.418350] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-04-26 15:20:56.429128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.429155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.429188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.429219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.429246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.429281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.429317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.429349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.429380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.429752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.429786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.429819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.429855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.429884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.429919] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.429952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.429983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.430019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.430051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.430087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.430118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.430152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.430186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.430215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.430245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.430277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.430313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.430345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.430385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.083 [2024-04-26 15:20:56.430418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.430452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.430486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.430514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.430549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.430580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.430615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.083 [2024-04-26 15:20:56.430647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.430684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.430719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.430748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.430774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.430811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.430850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.430881] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.430917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.430952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.430985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.431782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 
15:20:56.432154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.432959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.433001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.433029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.433064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 
[2024-04-26 15:20:56.433096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.433127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.433164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.433193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.433226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.433258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.433290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.433327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.433357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.433390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.433421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.433451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.433485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.433515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.433550] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.433583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.433618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.433648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.433678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.084 [2024-04-26 15:20:56.433711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.433746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.433779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.433815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.433850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.433916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.433949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.433986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.434018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.434046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.085 [2024-04-26 15:20:56.434078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.434109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.434142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.434174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.434205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.434242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.434605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.434640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.434676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.434704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.434742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.434771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.434800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.434830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.434868] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.434902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.434938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.434969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 
15:20:56.435851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.435978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.436005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.436035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.436064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.436096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.436123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.436160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.436192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.436223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.436256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085 [2024-04-26 15:20:56.436287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085
[2024-04-26 15:20:56.436318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.085
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:39.085
[... the same ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated continuously from 15:20:56.436350 through 15:20:56.447876 (log timestamps 00:10:39.085-00:10:39.088) ...]
15:20:56.447906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.447937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.447968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.448000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.448036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.448066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.448096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.448131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.448163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.448188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.448224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.448258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.448285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.448316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.448348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.448385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.448414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.448448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.448480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.448512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.448546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.448580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.448614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.448647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.448702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.448732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.088 [2024-04-26 15:20:56.448764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 
[2024-04-26 15:20:56.449185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449652] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.449973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450626] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.450983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.451019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.451049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.451079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.451110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.451141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.451174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.451537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.451570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.451604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.451638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.451667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.451702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.451734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.451770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.451800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.451828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.451868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 
15:20:56.451900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.451932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.451961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.452020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.452051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.452080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.452114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.452145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.089 [2024-04-26 15:20:56.452204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.452234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.452273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.452304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.452335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.452365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.452396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.452426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.452459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.452493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.452522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.452548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.452578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.452613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.452644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.452672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.452711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.452743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.452774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.452811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.452846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 
[2024-04-26 15:20:56.452884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.452916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.452942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.452970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.453004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.453031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.453062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.453091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.453121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.453149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.453178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.453209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.453242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.453275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.453306] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.453339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.453369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.453401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.453435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.453465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.453498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.453531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.453566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.453596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.453958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.453988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454565] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.454997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.455028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.090 [2024-04-26 15:20:56.455061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.466492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.466525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.466555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.466589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.466619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.466655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.466688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.466716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.466744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.466786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.466815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.466851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.466884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.466920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.466952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 
15:20:56.466980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.467879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 
[2024-04-26 15:20:56.467912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.468270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.468299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.468328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.468382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.468417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.468449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.468480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.468512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.468544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.468572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.468601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.468638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.468673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.468709] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.468739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.468771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.468804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.468843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.468874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.468912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.468939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.468975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469631] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.094 [2024-04-26 15:20:56.469969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.470004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.470035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.470068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.470099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.470131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.470162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.470197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.470223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.470255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.470640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.470671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.470701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.470731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.470761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.470795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.470829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.470862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 
15:20:56.470891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.470920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.470959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.470992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 
[2024-04-26 15:20:56.471822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.471977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.472008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.472042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.472079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.472111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.472139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.472171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.472202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.472233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.472266] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.472299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.472330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.472364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.472397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.472429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.472459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.472488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.472520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.472552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.472592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.472625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.472656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.473006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.473043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.095 [2024-04-26 15:20:56.473077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.473111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.473145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.473179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.473208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.473236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.473270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.473306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.473335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.473368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.473400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.473435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.473469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.473502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.473532] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.095 [2024-04-26 15:20:56.473564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.096 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 
15:20:56.485618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.485995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 
[2024-04-26 15:20:56.486609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.486996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.487028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.487058] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.487092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.487128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.487158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.487191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.487896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.487936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.487966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.487996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.488029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.488060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.488098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.488129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.488169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.488202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.099 [2024-04-26 15:20:56.488236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.488270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.488301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.488331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.488363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.488427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.488458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.488489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.488519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.488553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.488589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.488624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.488656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.488688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.488721] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.099 [2024-04-26 15:20:56.488752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.488789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.488819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.488857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.488895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.488925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.488957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.488987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 
15:20:56.489640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.489931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 
[2024-04-26 15:20:56.490688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.490980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491135] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.491987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.100 [2024-04-26 15:20:56.492017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.101 [2024-04-26 15:20:56.492047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.101 [2024-04-26 15:20:56.492080] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:10:39.104 [2024-04-26 15:20:56.503326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.104 [2024-04-26 15:20:56.503355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.104 [2024-04-26 15:20:56.503390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.104 [2024-04-26 15:20:56.503418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.104 [2024-04-26 15:20:56.503451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.104 [2024-04-26 15:20:56.503483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.104 [2024-04-26 15:20:56.503514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.104 [2024-04-26 15:20:56.503552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.104 [2024-04-26 15:20:56.503586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.104 [2024-04-26 15:20:56.503616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.104 [2024-04-26 15:20:56.503655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.104 [2024-04-26 15:20:56.503683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.104 [2024-04-26 15:20:56.503713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.104 [2024-04-26 15:20:56.503743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.104 [2024-04-26 15:20:56.503777] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.104 [2024-04-26 15:20:56.503807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.104 [2024-04-26 15:20:56.503840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.104 [2024-04-26 15:20:56.503873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.104 [2024-04-26 15:20:56.503909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.104 [2024-04-26 15:20:56.503943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.104 [2024-04-26 15:20:56.503978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.104 [2024-04-26 15:20:56.504009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.104 [2024-04-26 15:20:56.504039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.403 [2024-04-26 15:20:56.504072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.403 [2024-04-26 15:20:56.504109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.403 [2024-04-26 15:20:56.504140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.403 [2024-04-26 15:20:56.504172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.403 [2024-04-26 15:20:56.504222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.403 [2024-04-26 15:20:56.504255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.403 [2024-04-26 15:20:56.504288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.403 [2024-04-26 15:20:56.504655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.403 [2024-04-26 15:20:56.504689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.403 [2024-04-26 15:20:56.504718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.403 [2024-04-26 15:20:56.504751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.403 [2024-04-26 15:20:56.504782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.403 [2024-04-26 15:20:56.504815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.403 [2024-04-26 15:20:56.504865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.403 [2024-04-26 15:20:56.504896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.403 [2024-04-26 15:20:56.504921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.403 [2024-04-26 15:20:56.504953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.403 [2024-04-26 15:20:56.504984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.403 [2024-04-26 15:20:56.505014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.403 [2024-04-26 15:20:56.505047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.403 [2024-04-26 
15:20:56.505083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.403 [2024-04-26 15:20:56.505117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.505972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 
[2024-04-26 15:20:56.505998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.506032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.506066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.506095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.506128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.506156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.506181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.506207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.506233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.506266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.506297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.506328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.506358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.506393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.506433] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.506461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.506491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.506522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.506553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.506586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.506618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.506652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507732] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.507977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.508009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.508041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.508072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.508102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.508137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.508170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.508199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.508236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.508266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.508291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.508320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.508353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.508380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.508410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.508443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.508474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.404 [2024-04-26 15:20:56.508504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.508543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.508577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.508609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 
15:20:56.508648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.508679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.508739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.508770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.508807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.508835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.508874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.508926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.508958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.508987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.509018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.509054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.509093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.509459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.509491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.509523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.509556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.509588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.509621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.509650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.509699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.509733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.509772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.509801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.509833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.509870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.509902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.509932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.509959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 
[2024-04-26 15:20:56.509992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510432] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.405 [2024-04-26 15:20:56.510907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same "Read NLB 1 * block size 512 > SGL length 1" error repeated with successive timestamps, 15:20:56.510939 through 15:20:56.513943 ...]
00:10:39.406 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... the same error line repeated with successive timestamps, 15:20:56.514302 through 15:20:56.522801; final entry truncated at "ctrlr_bdev.c:" ...]
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.408 [2024-04-26 15:20:56.522832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.408 [2024-04-26 15:20:56.522877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.408 [2024-04-26 15:20:56.522911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.408 [2024-04-26 15:20:56.522942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.408 [2024-04-26 15:20:56.522973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.408 [2024-04-26 15:20:56.523007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523778] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.523914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 
15:20:56.524853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.524988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 
[2024-04-26 15:20:56.525799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.525997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.526029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.526055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.526087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.526118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.526832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.526869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.526906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.409 [2024-04-26 15:20:56.526937] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.526967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527907] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.527971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 
15:20:56.528877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.528916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 
[2024-04-26 15:20:56.529935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.529968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.530006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.530039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.530073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.530110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.530139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.530177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.530209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.410 [2024-04-26 15:20:56.530244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.411 [2024-04-26 15:20:56.530275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.411 [2024-04-26 15:20:56.530309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.411 [2024-04-26 15:20:56.530338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.411 [2024-04-26 15:20:56.530368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.411 [2024-04-26 15:20:56.530406] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414
[2024-04-26 15:20:56.541451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.541485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.541516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.541545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.541578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.541608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.541644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.541673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.541732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.541764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.541805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.541841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.541873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.541908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.541937] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.541996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542914] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.542979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.543009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.543043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.543071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.543105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.543136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.543542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.543578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.543608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.543643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.543672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.543703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.543740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.543773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.543807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.543847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.543879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.543911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.543943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.543976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.544007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.544061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.544092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.544123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.544153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.544184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 15:20:56.544219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.414 [2024-04-26 
15:20:56.544251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.544282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.544311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.544348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.544379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.544408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.544440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.544467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.544504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.544534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.544562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.544597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.544633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.544663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.544693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.544718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.544748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.544780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.544810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.544845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.544881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.544909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.544939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.544974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.545005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.545038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.545073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.545110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.545141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 
[2024-04-26 15:20:56.545173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.545207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.545243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.545276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.545301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.545330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.545374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.545408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.545439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.545469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.545501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.545561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.545592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.545626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.545992] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546945] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.546975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.547006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.547042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.547073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.547103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.547136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.547173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.547200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.547228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.547254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.547284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.547312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.547344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.547373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.547415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.547447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.547476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.547506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.547541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.547571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.547600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.547632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.547665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.415 [2024-04-26 15:20:56.547698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.416 [2024-04-26 15:20:56.547728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.416 [2024-04-26 15:20:56.547761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.416 [2024-04-26 15:20:56.547796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.416 [2024-04-26 15:20:56.547827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.416 [2024-04-26 
15:20:56.547861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.416 [2024-04-26 15:20:56.547895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.416 [2024-04-26 15:20:56.547931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.416 [2024-04-26 15:20:56.547964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.416 [2024-04-26 15:20:56.548007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.416 [2024-04-26 15:20:56.548367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.416 [2024-04-26 15:20:56.548399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.416 [2024-04-26 15:20:56.548431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.416 [2024-04-26 15:20:56.548465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.416 [2024-04-26 15:20:56.548498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.416 [2024-04-26 15:20:56.548533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.416 [2024-04-26 15:20:56.548567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.416 [2024-04-26 15:20:56.548597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.416 [2024-04-26 15:20:56.548625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.416 [2024-04-26 15:20:56.548661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.416 [2024-04-26 15:20:56.548698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.416 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:39.417
[2024-04-26 15:20:56.559678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.559707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.559734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.559764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.559799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.559834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.560215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.560248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.560276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.560308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.560340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.560372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.560417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.560449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.560485] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.560518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.560554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.560588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.560619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.560663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.560697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.560736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.560765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.560794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.560832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.560870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.560904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.560939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.560973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.419 [2024-04-26 15:20:56.561009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.561041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.561074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.561108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.561145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.561181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.561211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.561244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.561273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.561304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.561338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.561374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.561406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.419 [2024-04-26 15:20:56.561431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.561465] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.561496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.561523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.561557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.561588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.561623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.561655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.561690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.561719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.561752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.561781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.561814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.561849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.561880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.561914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.561947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.561976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.562005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.562037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.562070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.562101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.562137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.562171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.562201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.562236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.562266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.562651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.562688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 true 00:10:39.420 [2024-04-26 15:20:56.562720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 
[2024-04-26 15:20:56.562765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.562797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.562843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.562872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.562904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.562938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.562968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563225] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.563968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.564003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.564038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.564073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.564110] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.564141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.564176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.564205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.564242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.564273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.564302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.564333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.564363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.564391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.564424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.564454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.564498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.564529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.564569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.564601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.420 [2024-04-26 15:20:56.564638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.564669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 
15:20:56.565403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.565971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 
[2024-04-26 15:20:56.566367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566855] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.421 [2024-04-26 15:20:56.566887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:39.424 15:20:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669
00:10:39.424 15:20:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:39.425 [2024-04-26 15:20:56.578017] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.578759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.579114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.579150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.579179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.579210] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.579240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.579272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.579304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.579339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.425 [2024-04-26 15:20:56.579369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.579413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.579447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.579479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.579508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.579549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.579578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.579605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.579635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.579667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.579700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.579730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.579760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.579792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.579818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.579859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.579890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.579917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.579947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.579977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 
15:20:56.580128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.580971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.581000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 
[2024-04-26 15:20:56.581031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.581070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.581430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.581471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.581504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.581541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.581573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.581603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.581642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.581675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.581708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.581737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.581766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.581797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.581830] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.581867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.581898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.581929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.581960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.581989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582711] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.582974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.583007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.583039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.583067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.583101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.583135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.583171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.583201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.583228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.583268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.426 [2024-04-26 15:20:56.583299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.583332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.583362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.583396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.583432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.583789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.583820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.583855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.583888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.583912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.583947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 
15:20:56.583978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 
[2024-04-26 15:20:56.584867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.584994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.585021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.585055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.585079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.585103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.585126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.585150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.585173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.585197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.585220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.427 [2024-04-26 15:20:56.585245] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.428 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
> SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595626] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.429 [2024-04-26 15:20:56.595937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.595968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.595992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 
15:20:56.596517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.596960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.597099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.597129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.597159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.597189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.597224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.597253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.597285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.597316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.597346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.597376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.597410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.597448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.597480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.597514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 
[2024-04-26 15:20:56.597545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.597582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.597614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598380] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.598997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599217] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.599935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.600141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.600177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.600209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.600236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.600268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.600298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 
15:20:56.600328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.600357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.600384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.600412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.600445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.600479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.600510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.600540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.600571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.600603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.430 [2024-04-26 15:20:56.600633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.600665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.600701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.600729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.600784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.600813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.600860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.600892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.600928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.600964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.600995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.601025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.601054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.601082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.601115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.601146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.601177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.601210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.601241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 
[2024-04-26 15:20:56.601271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.601306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.601338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.601369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.601397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.601427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.601457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.601496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.601525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.601557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.601585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.601619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.601940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.601973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.602002] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.602032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.602061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.602095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.602126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.602160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.602189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.602221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.602253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.602285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.602315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.602346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.602379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.602408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.431 [2024-04-26 15:20:56.602436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.431 [2024-04-26 15:20:56.602465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.613665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:10:39.433 [2024-04-26 15:20:56.613694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.613727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.613758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.613789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.613823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.613857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.613883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.613915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.613951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.613983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.614019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.614052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.614086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.614117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.614145] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.614175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.614205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.614244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.614279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.614315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.614344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.614372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.614406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.614438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.614473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.614512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.614543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.614571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.614602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.614632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.614663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.614690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.614721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.433 [2024-04-26 15:20:56.614752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.614784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.614813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.614857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.614889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.614921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.614958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.614990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.615022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.615052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 
15:20:56.615091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.615127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.615159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.615190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.615232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.615265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.615295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.615322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.615354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.615386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.615417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.615457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.615486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.615518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.615549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.615580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.615610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.615639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.615996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 
[2024-04-26 15:20:56.616340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616804] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.616971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617754] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.617979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.618014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.618046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.618101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.618883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.618913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.618946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.618977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 
15:20:56.619489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.434 [2024-04-26 15:20:56.619937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.619969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 
[2024-04-26 15:20:56.620437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.435 [2024-04-26 15:20:56.620860] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:39.436 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
> SGL length 1 00:10:39.437 [2024-04-26 15:20:56.632326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.632362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.632394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.632423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.632470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.632504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.632541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.632572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.632603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.632637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.632669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.632709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.632738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.632774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.632809] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.632844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.633619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.633655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.633689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.633719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.633754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.633792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.633822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.633859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.633892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.633923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.633953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.633986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 
15:20:56.634490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.634989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.635040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.635073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.635107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.635141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.635172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.635202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.635234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.635262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.635292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.635322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.635352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.635396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.635429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 
[2024-04-26 15:20:56.635464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.437 [2024-04-26 15:20:56.635497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.635527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.635564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.635597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.635629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.635659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.635800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.635841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.635871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.635904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.635936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.635965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636052] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.636977] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.637862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.638574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 
15:20:56.638608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.638640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.638673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.638708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.638740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.638774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.638805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.638846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.638884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.638920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.638944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.638969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.638993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 
[2024-04-26 15:20:56.639448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639888] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.438 [2024-04-26 15:20:56.639919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.440 [2024-04-26 15:20:56.651376] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.440 [2024-04-26 15:20:56.651402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.440 [2024-04-26 15:20:56.651427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.440 [2024-04-26 15:20:56.651452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.440 [2024-04-26 15:20:56.651478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.440 [2024-04-26 15:20:56.651502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.440 [2024-04-26 15:20:56.651527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.440 [2024-04-26 15:20:56.651552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.440 [2024-04-26 15:20:56.651578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.440 [2024-04-26 15:20:56.651604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.440 [2024-04-26 15:20:56.651629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.440 [2024-04-26 15:20:56.651654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.440 [2024-04-26 15:20:56.651678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.440 [2024-04-26 15:20:56.651702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.440 [2024-04-26 15:20:56.651728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.440 [2024-04-26 15:20:56.651752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.440 [2024-04-26 15:20:56.651778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.440 [2024-04-26 15:20:56.651803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.440 [2024-04-26 15:20:56.651829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.440 [2024-04-26 15:20:56.651858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.440 [2024-04-26 15:20:56.651886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.440 [2024-04-26 15:20:56.651913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.440 [2024-04-26 15:20:56.651938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.440 [2024-04-26 15:20:56.651963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.651988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652119] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.652827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 
15:20:56.653310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.653971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.654002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.654036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.654068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.654108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.654140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.654170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.654209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 
[2024-04-26 15:20:56.654239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.654276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.654314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.654345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.654370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.654406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.654435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.654467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.654498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.654534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.654565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.654935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.654970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655042] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.655979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656081] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.656994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 
15:20:56.657027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.657059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.657087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.657214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.657247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.657280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.657312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.657366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.657398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.657431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.657461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.657491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.657527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.657560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.657590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.657624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.657656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.657692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.657723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.441 [2024-04-26 15:20:56.658163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 [2024-04-26 15:20:56.658193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 [2024-04-26 15:20:56.658228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 [2024-04-26 15:20:56.658263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 [2024-04-26 15:20:56.658291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 [2024-04-26 15:20:56.658322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 [2024-04-26 15:20:56.658354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 [2024-04-26 15:20:56.658387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 [2024-04-26 15:20:56.658420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 [2024-04-26 15:20:56.658453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 
[2024-04-26 15:20:56.658488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 [2024-04-26 15:20:56.658522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 [2024-04-26 15:20:56.658550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 [2024-04-26 15:20:56.658585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 [2024-04-26 15:20:56.658621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 [2024-04-26 15:20:56.658650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 [2024-04-26 15:20:56.658679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 [2024-04-26 15:20:56.658708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 [2024-04-26 15:20:56.658739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 [2024-04-26 15:20:56.658767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 [2024-04-26 15:20:56.658801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 [2024-04-26 15:20:56.658827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 [2024-04-26 15:20:56.658858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 [2024-04-26 15:20:56.658884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 [2024-04-26 15:20:56.658909] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.442 [2024-04-26 15:20:56.658934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.443 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:39.445 [2024-04-26 15:20:56.669729] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.669759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.669791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.669822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.669856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.669894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.670256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.670293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.670325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.670361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.670393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.670426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.670464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.670496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.670544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.445 [2024-04-26 15:20:56.670572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.670604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.670636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.670668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.670698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.670730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.670769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.670804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.670835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.670867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.670896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.670926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.670960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.670993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671024] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.445 [2024-04-26 15:20:56.671934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.671965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 
15:20:56.671995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.672034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.672063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.672093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.672130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.672161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.672193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.672223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.672252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.672285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.672318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.672688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.672721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.672758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.672794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.672828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.672863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.672897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.672927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.672957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.672991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 
[2024-04-26 15:20:56.673302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673755] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.673995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.674024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.674052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.674081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.674114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.674143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.674173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.674210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.446 [2024-04-26 15:20:56.674241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.674269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.674298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.674343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.674378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.674415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.674443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.674481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.674516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.674549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.674581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.674612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.674645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.674674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.674707] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.674740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.674768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.675142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.675181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.675214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.675244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.675280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.675311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.675358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.675389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.675415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.675445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.675478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.675511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.675545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.675580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.675611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.675647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.675677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.675706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.675736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.675769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.675803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.446 [2024-04-26 15:20:56.675836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.675875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.675907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.675944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.675976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 
15:20:56.676008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 
[2024-04-26 15:20:56.676898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.676991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.677022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.677051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.677089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.677121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.677473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.677510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.677542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.677577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.677609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.677642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.677669] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.447 [2024-04-26 15:20:56.677703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... same error repeated through 2024-04-26 15:20:56.688513 ...]
[2024-04-26 15:20:56.688548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.688582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.688615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.688644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.688670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.688695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.688721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.688746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.688779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.688809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.688845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.688880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.688912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.688948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.688980] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.689013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.689061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.689096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.689133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.689166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.689201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.689233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.689264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.689613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.689646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.689683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.689717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.689748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.689782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.451 [2024-04-26 15:20:56.689817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.689855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.689891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.689922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.689975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690292] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.690979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.691009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.691043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.691079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.691108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.691139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.691169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.691201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 
15:20:56.691235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.691270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.691303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.691334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.691367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.691399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.691456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.691488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.691519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.691552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.451 [2024-04-26 15:20:56.691583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.691616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.691649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.691686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 
[2024-04-26 15:20:56.692547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.692971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693003] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693954] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.693989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.694023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.694060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.694093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.694124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.694472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.694511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.694544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.694576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.694614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.694642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.694678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.694714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.694750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.694779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.694809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.694844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.694877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.694908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.694945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.694977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.695012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.695045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.695077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.695109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.695142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.695175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.452 [2024-04-26 15:20:56.695217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.453 [2024-04-26 
15:20:56.695249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.453 [2024-04-26 15:20:56.695282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.453 [2024-04-26 15:20:56.695310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.453 [2024-04-26 15:20:56.695344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.453 [2024-04-26 15:20:56.695373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.453 [2024-04-26 15:20:56.695404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.453 [2024-04-26 15:20:56.695442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.453 [2024-04-26 15:20:56.695473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.453 [2024-04-26 15:20:56.695508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.453 [2024-04-26 15:20:56.695544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.453 [2024-04-26 15:20:56.695574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.453 [2024-04-26 15:20:56.695604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.453 [2024-04-26 15:20:56.695633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.453 [2024-04-26 15:20:56.695659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.453 [2024-04-26 15:20:56.695696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.453 [2024-04-26 15:20:56.695730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.455 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:39.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.455 15:20:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:39.743 [2024-04-26 15:20:56.881393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.743 [2024-04-26 15:20:56.881433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.743 [2024-04-26 15:20:56.881460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.743 [2024-04-26 15:20:56.881488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.743 [2024-04-26 15:20:56.881524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.743 [2024-04-26 15:20:56.881561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.743 [2024-04-26 15:20:56.881603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.743 [2024-04-26 15:20:56.881635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.743 [2024-04-26 15:20:56.881666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.743 [2024-04-26 15:20:56.881693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.743 [2024-04-26 15:20:56.881722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.743 [2024-04-26 15:20:56.881750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.743 [2024-04-26 15:20:56.881781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.743 [2024-04-26 15:20:56.881812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.743 [2024-04-26 15:20:56.881848] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.743 [2024-04-26 15:20:56.881872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.743 [2024-04-26 15:20:56.882302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.743 [2024-04-26 15:20:56.882336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.743 [2024-04-26 15:20:56.882363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.743 [2024-04-26 15:20:56.882392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.882424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.882457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.882488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.882520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.882557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.882588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.882620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.882652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.882686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.744 [2024-04-26 15:20:56.882718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.882747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.882775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.882805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.882843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.882871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.882905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.882937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.882969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883154] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.883997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 
15:20:56.884259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.884999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.885036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.885069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.885097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.885128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.885160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 
[2024-04-26 15:20:56.885188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.885221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.885252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.885285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.885313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.885344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.885374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.885406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.885637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.885669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.885706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.885739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.885771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.885803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.885843] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.885873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.885905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.744 [2024-04-26 15:20:56.885937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.885965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886723] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.886986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.887015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.887049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.887080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.887110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.887144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.887173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.887203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.887230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.887262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.887297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.887328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.887358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.887385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.887416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.887444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.887483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.887515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.887549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.887578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 
15:20:56.887950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.887982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.888012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.888045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.888076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.888106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.888131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.888157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.888191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.888222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.888251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.888277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.888307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.888335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.888362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [2024-04-26 15:20:56.888388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.745 [... identical ctrlr_bdev.c:309:nvmf_bdev_ctrlr_read_cmd error repeated for each subsequent read command (timestamps 15:20:56.888423 through 15:20:56.899009); duplicate log entries omitted ...] [2024-04-26 15:20:56.899352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.748 [2024-04-26 15:20:56.899388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.748 [2024-04-26 15:20:56.899427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.899458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.899488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.899517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.899546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.899577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.899608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.899638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.899666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.899693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.899724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.899755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.899783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 
[2024-04-26 15:20:56.899813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.899847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.899879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.899911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.899939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.899970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900249] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.900975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.901009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.901039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.901073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.901108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.901146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.901173] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.901207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.901237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.901272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.901304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.901336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.901698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.901730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.901764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.901793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.901822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.901855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.901887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.901922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.901959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.901990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 
15:20:56.902435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.749 [2024-04-26 15:20:56.902879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.902930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.902959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.902990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.903021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.903055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.903085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.903113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.903144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.903183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.903215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.903247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.903277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.903307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.903337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 
[2024-04-26 15:20:56.903366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.903392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.903420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.903450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.903478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.903505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.903530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.903559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.903585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.903619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.903646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904113] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.904992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905076] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 15:20:56.905899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.750 [2024-04-26 
15:20:56.905929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.752 15:20:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1052 00:10:39.752 15:20:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:10:39.754 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
> SGL length 1 00:10:39.754 [2024-04-26 15:20:56.917302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.917332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.917368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.917395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.917423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.917455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.917490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.917518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.917570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.917602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.917638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.917672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.917704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.917760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.917789] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.917826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.917861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.917890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.917920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.918271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.918305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.918336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.918367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.918395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.918425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.918456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.918489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.918519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.918549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.918579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.918608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.918638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.918674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.918702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.918732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.918761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.918792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.918826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.918857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.918886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.918919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.918948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.918978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 
15:20:56.919009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.919038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.919069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.919097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.919128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.919158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.919196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.919229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.919262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.754 [2024-04-26 15:20:56.919297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.919322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.919351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.919381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.919413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.919445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.919484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.919512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.919544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.919578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.919610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.919643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.919674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.919699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.919728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.919762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.919787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.919818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.919848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.919873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 
[2024-04-26 15:20:56.919899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.919923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.919949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.919975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.920000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.920032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.920068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.920097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.920130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.920161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.920190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.920882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.920913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.920943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.920978] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921918] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.921950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.922000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.922029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.922063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.922094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.922132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.922162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.922193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.922226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.922253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.922299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.922331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.922366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.922404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.922434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.922477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.922508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.922545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.922574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.922605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.922635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.922663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.755 [2024-04-26 15:20:56.922693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.922721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.922760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.922793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.922822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.922856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 
15:20:56.922888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.922915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.923989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 
[2024-04-26 15:20:56.924028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.924061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.924118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.924154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.924184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.924216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.924249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.924285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.924316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.924358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.924388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.924421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.924453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.924483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.924514] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.756 [2024-04-26 15:20:56.924544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* lines from ctrlr_bdev.c:309 repeated, timestamps 2024-04-26 15:20:56.924576 through 15:20:56.936432, omitted]
00:10:39.759 [2024-04-26 15:20:56.936464] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.759 [2024-04-26 15:20:56.936495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.936532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.936562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.936621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.936652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.936682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.936715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.936751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.936784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.936813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.936845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.936878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.936916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.936945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.760 [2024-04-26 15:20:56.936977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937417] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.937988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 
15:20:56.938536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.938916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.939512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.939545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.939580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.939617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.939651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.939684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.939709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.939743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.939776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.760 [2024-04-26 15:20:56.939807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.939842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.939877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.939906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.939952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.939986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 
[2024-04-26 15:20:56.940053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940505] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.761 [2024-04-26 15:20:56.940997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941446] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.941986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.942016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.942045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.942073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.942106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.942164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.942195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.942231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.942263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.942293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.942327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.942357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.942388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.942418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.942455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 
15:20:56.942486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.942516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.942556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.942588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.942628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.942659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.943071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.943104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.943142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.943176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.761 [2024-04-26 15:20:56.943212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.943244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.943279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.943309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.943339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.943370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.943400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.943429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.943465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.943495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.943525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.943557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.943588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.943625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.943655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.943683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.943720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.943751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.943788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 
[2024-04-26 15:20:56.943827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.943862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.943902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.943937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.943973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.944004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.944034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.944062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.944089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.944118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.944153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.944182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.944211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.944242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.944274] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.762 [2024-04-26 15:20:56.944307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-04-26 15:20:56.955084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 Message suppressed 999 times: [2024-04-26 15:20:56.955290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 Read completed with error (sct=0, sc=15) 00:10:39.765 [2024-04-26 15:20:56.955324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:10:39.765 [2024-04-26 15:20:56.955522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.955973] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.956817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.957177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.957207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.957245] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.957280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.957305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.957332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.957358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.957384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.957409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.957434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.957459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.957492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.957528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.957564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.957594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.765 [2024-04-26 15:20:56.957623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.957663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.957689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.957714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.957739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.957762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.957788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.957813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.957843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.957876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.957905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.957938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.957969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.957995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 
15:20:56.958069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 
[2024-04-26 15:20:56.958795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.958873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.959144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.959173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.959202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.959233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.959264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.959299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.959337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.959366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.959398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.959430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.959462] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.959495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.959524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.959554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.959585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.959617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.959648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.959936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.959968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.960000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.960030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.766 [2024-04-26 15:20:56.960066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960674] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.960991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.961024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.961055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.961084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.961117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.961154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.961188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.961220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.961248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.961275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.961309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.961338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.961368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.961406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.961586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.961621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.961657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.961691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 15:20:56.961723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767 [2024-04-26 
15:20:56.961760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.767
[... identical *ERROR* entry repeated many times, 15:20:56.961790 through 15:20:56.972741 ...]
00:10:39.770 [2024-04-26 
15:20:56.972768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.972798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.972830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.972869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.972898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.972935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.972969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.973004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.973037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.973073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.973440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.973472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.973529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.973559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.973591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.973622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.973651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.973682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.973716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.973745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.973774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.973820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.973855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.973886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.973915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.973946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.973992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.974026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.974062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 
[2024-04-26 15:20:56.974091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.974119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.974146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.974175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.974206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.974236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.974269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.974298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.974328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.974359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.974393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.974424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.974455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.974485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.974517] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.974544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.974572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.974606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.974639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.974670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.770 [2024-04-26 15:20:56.974702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.974734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.974763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.974796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.974830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.974867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.974899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.974936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.974965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.771 [2024-04-26 15:20:56.974999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.975028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.975060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.975095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.975129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.975163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.975197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.975228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.975271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.975303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.975333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.975362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.975395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.975429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.975461] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.975825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.975863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.975897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.975929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.975959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.975988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 
15:20:56.976709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.976970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.977005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.977035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.977070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.977104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.977139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.977176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.977207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.977238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.977269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.977308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.977341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.977370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.977399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.977424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.977453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.977488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.977522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.977556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.977589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.977622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 
[2024-04-26 15:20:56.977655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.977685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.977717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.771 [2024-04-26 15:20:56.977745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.977777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.977805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.977835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978406] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.978998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979354] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.772 [2024-04-26 15:20:56.979842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [same *ERROR* line repeated for each subsequent read request; timestamps 15:20:56.979875 through 15:20:56.991153 elided] 00:10:39.776 [2024-04-26 15:20:56.991181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 *
block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.991211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.991244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.991296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.991329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.991367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.991396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.991424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.991460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.991491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.991528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.991561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.991597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.991631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.991661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 
15:20:56.991692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.991724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.991766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.991796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.991829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.991866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.991895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.991922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.991949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.991983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.992015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.992046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.992079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.992430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.992465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.992500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.992532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.992564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.992599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.992633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.992663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.992697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:39.776 [2024-04-26 15:20:56.992729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.992767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.992801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.992832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.992885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.992915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.992946] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.992978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993881] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.993986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.994019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.994050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.994078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.994103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.994133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.994162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.994192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.776 [2024-04-26 15:20:56.994225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.994254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.994283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.994321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.994352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.994388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.994420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.994454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.994813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.994853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.994888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.994920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.994952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.994985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 
15:20:56.995137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.995986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 
[2024-04-26 15:20:56.996036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996408] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.996721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997692] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.997989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.998026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.998062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.998091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.998124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.998155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.777 [2024-04-26 15:20:56.998192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical "nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entries repeated continuously from 15:20:56.998221 through 15:20:57.010087; duplicate log lines elided]
block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 
15:20:57.010601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.010985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.011017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.011052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.011088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.011121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.011158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.011193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.011225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.011251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.011286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.011320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.011353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.011385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.011418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.011447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.011481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.011516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 
[2024-04-26 15:20:57.011550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.011580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.011973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012391] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.012992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013368] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.013990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.014020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.014052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.780 [2024-04-26 15:20:57.014791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.014823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.014865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.014894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.014924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.014963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.014994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 
15:20:57.015041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.015984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 
[2024-04-26 15:20:57.016093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016537] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.016975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.017138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.781 [2024-04-26 15:20:57.017174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.017206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.017242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.017274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.017305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.017334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.017363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.017391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.017424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.017459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.017490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.017522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.017555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.017588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [2024-04-26 15:20:57.017619] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.781 [... identical *ERROR* message repeated through 2024-04-26 15:20:57.028405, duplicates omitted ...] [2024-04-26 15:20:57.028405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:10:39.783 [2024-04-26 15:20:57.028435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.028467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.028494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.028519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.028545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.028570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.028594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.028620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.028646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.028674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.028705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.028740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.028776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029179] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.029964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.783 [2024-04-26 15:20:57.030001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 
15:20:57.030135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.030981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.031013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.031045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.031079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.031112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 
[2024-04-26 15:20:57.031148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.031180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.031213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.031249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.031282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:39.784 [2024-04-26 15:20:57.031637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.031671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.031702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.031734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.031768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.031806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.031841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.031874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.031907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:10:39.784 [2024-04-26 15:20:57.031935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.031965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032409] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.032967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.033000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.033031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.033070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.033108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.033139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.033168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.033203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.033234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.033265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.033296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.033328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.033358] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.033386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.033417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.033448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.033484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.033514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.033548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.033585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.033621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.033654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.033690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 
15:20:57.034635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.034988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.035022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.035052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.035086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.035121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.035153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.035185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.035216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.035246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.035276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.035314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.035348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.035378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.035409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.035441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.035477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.035506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.035538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.784 [2024-04-26 15:20:57.035568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.785 
[2024-04-26 15:20:57.035600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.785 [2024-04-26 15:20:57.035642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.785 [2024-04-26 15:20:57.035670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.785 [2024-04-26 15:20:57.035702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.785 [2024-04-26 15:20:57.035742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.785 [2024-04-26 15:20:57.035773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.785 [2024-04-26 15:20:57.035803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.785 [2024-04-26 15:20:57.035832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.785 [2024-04-26 15:20:57.035872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.785 [2024-04-26 15:20:57.035906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.785 [2024-04-26 15:20:57.035933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.785 [2024-04-26 15:20:57.035966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.785 [2024-04-26 15:20:57.036002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.785 [2024-04-26 15:20:57.036030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.785 [2024-04-26 15:20:57.036061] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.785 [2024-04-26 15:20:57.036097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.785 
[... identical *ERROR* record repeated, timestamps 15:20:57.036129 through 15:20:57.048030 ...] 
[2024-04-26 15:20:57.048064] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.788 [2024-04-26 15:20:57.048098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.788 [2024-04-26 15:20:57.048129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.788 [2024-04-26 15:20:57.048160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.788 [2024-04-26 15:20:57.048202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.788 [2024-04-26 15:20:57.048233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.788 [2024-04-26 15:20:57.048268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.788 [2024-04-26 15:20:57.048299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.788 [2024-04-26 15:20:57.048330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.788 [2024-04-26 15:20:57.048368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.788 [2024-04-26 15:20:57.048398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.788 [2024-04-26 15:20:57.048433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.788 [2024-04-26 15:20:57.048465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.788 [2024-04-26 15:20:57.048496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.788 [2024-04-26 15:20:57.048528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.788 [2024-04-26 15:20:57.048560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.788 [2024-04-26 15:20:57.048617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.788 [2024-04-26 15:20:57.049008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.788 [2024-04-26 15:20:57.049041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.788 [2024-04-26 15:20:57.049080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.788 [2024-04-26 15:20:57.049113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049391] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.049993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 
15:20:57.050331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.050970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.051000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.051031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.051068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.051421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.051452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.051477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.051508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.051539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.051570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 
[2024-04-26 15:20:57.051600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.051635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.051666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.051697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.051727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.051763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.051796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.051824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.051858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.051888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.051917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.051945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.051974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.052006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.052041] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.052077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.052110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.052138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.052170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.052202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.052229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.052253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.052284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.052317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.052350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.052382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.052418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.789 [2024-04-26 15:20:57.052454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.052490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.790 [2024-04-26 15:20:57.052520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.052546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.052570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.052599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.052629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.052658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.052690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.052722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.052753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.052792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.052818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.052859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.052888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.052915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.052940] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.052967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.052991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.053017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.053042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.053067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.053092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.053116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.053144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.053168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.053207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.053241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.053272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.053303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.053657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.053691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.053724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.053755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.053797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.053824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.053865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.053896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.053927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.053961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.053993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 
15:20:57.054157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 true 00:10:39.790 [2024-04-26 15:20:57.054824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.054985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.055015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 
1 00:10:39.790 [2024-04-26 15:20:57.055044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.055073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.055097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.055127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.055158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.055189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.055219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.055250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.055283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.055313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.055345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.055379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.055411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.055441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.055470] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.790 [2024-04-26 15:20:57.055496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 
[2024-04-26 15:20:57.066294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.066325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.066358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.066390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.066425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.066457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.066488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.066514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.066549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.066574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.066598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.066622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.066646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.066671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.066695] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.066721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.066746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.066770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.066795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.066820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.066850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.066882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.066915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.066944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.066978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.067013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.067046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.067078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.067109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.794 [2024-04-26 15:20:57.067158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.067186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.067220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.067252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.067289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.067654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.067687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.067718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.067750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.067779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.067811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.067843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.067875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.067904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.067951] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.067983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.068017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.068048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.068079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.068107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.068146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.068178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.068225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.068254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.068283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:39.794 [2024-04-26 15:20:57.068313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.068344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.068371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 
15:20:57.068397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.068430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.068456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.794 [2024-04-26 15:20:57.068482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.068509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.068536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.068570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.068598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.068636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.068661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.068695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.068734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.068767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.068799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.068828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.068865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.068897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.068927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.068963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.068998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.069029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.069057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.069091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.069124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.069159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.069189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.069229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.069256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.069285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 
[2024-04-26 15:20:57.069319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.069349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.069378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.069417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.069448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.069481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.069510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.069542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.069572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.069604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.069633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.069664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070108] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.070971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071096] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.795 [2024-04-26 15:20:57.071744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.071778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.071811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.071846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.071892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.071924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.071957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.071986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 
15:20:57.072015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.072048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.072078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.072435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.072465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.072493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.072527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.072562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.072593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.072623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.072655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.072681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.072709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.072738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.072771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.072806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.072840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.072876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.072907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.072937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.072970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.073001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.073034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.073065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.073101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.073133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.073166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.073199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.073233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 
[2024-04-26 15:20:57.073268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.073303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.073335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.073367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.073400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.073430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.073460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.073489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.073515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.073542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.073568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.073594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.073619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.073643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.073666] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796 [2024-04-26 15:20:57.073691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.796
[... identical ctrlr_bdev.c:309 read errors repeated; timestamps 15:20:57.073716 through 15:20:57.079960 elided ...]
00:10:39.798 15:20:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669
[... identical ctrlr_bdev.c:309 read errors repeated; timestamps 15:20:57.079989 through 15:20:57.080323 elided ...]
00:10:39.798 15:20:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[... identical ctrlr_bdev.c:309 read errors repeated; timestamps 15:20:57.080355 through 15:20:57.084512 elided ...]
00:10:39.799 [2024-04-26
15:20:57.084546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.799 [2024-04-26 15:20:57.084575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.799 [2024-04-26 15:20:57.084607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.799 [2024-04-26 15:20:57.084636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.799 [2024-04-26 15:20:57.084683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.799 [2024-04-26 15:20:57.084717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.084746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.084777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.084808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.084841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.084871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.084908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.084939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.084970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 
[2024-04-26 15:20:57.085483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085925] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.085986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.086021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.086048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.086076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.086112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.086139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.086164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.086191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.086217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087660] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.087986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.088023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.088054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.088085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.088118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.088151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.088185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.088219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.088252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.088279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.088337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.088370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.088401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.088433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.088464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.088497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.088525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.088554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 15:20:57.088630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.800 [2024-04-26 
15:20:57.088666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.088699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.088731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.088762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.088792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.088821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.088856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.088886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.088916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.088953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.088988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 
[2024-04-26 15:20:57.089753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.089987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090210] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.090977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.091008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.091039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.091076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.091110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.091140] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.091173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.091212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.091247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.091275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.091316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.091345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.091377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.092104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.092137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.092179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.801 [2024-04-26 15:20:57.092213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.802 [2024-04-26 15:20:57.092247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.802 [2024-04-26 15:20:57.092279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.802 [2024-04-26 15:20:57.092311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.802 [2024-04-26 15:20:57.092342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same *ERROR* line from ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd repeated, timestamps 15:20:57.092371 through 15:20:57.103005 ...]
00:10:39.805 [2024-04-26 15:20:57.103036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.103066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.103097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.103129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.103159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.103187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.103216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.103249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.103276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.103652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.103680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.103705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.103735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.103770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.103806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 
15:20:57.103841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.103871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.103905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.103938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.103970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 
[2024-04-26 15:20:57.104696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.104981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.105010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.105044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.105075] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.105110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.105142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.105175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.105209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.105238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.105270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.105298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.105341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.805 [2024-04-26 15:20:57.105373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.105405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.105437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.105472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.105506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.105875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.806 [2024-04-26 15:20:57.105909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.105943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.105974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.106005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.106035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.106068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.106097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.106127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.106170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.106201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.106227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.106261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.106294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.106332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.106363] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.106397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:39.806 [2024-04-26 15:20:57.106677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.106711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.106739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.106799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.106832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.106870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.106901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.106932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.106963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.106989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 
15:20:57.107082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.107988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 
[2024-04-26 15:20:57.108013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108599] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.108970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.109000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.109031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.109065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.806 [2024-04-26 15:20:57.109098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.109137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.109172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.109205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.806 [2024-04-26 15:20:57.109234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.807 [2024-04-26 15:20:57.109265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.807 [2024-04-26 15:20:57.109295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.807 [2024-04-26 15:20:57.109322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.807 [2024-04-26 15:20:57.109358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.807 [2024-04-26 15:20:57.109395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.807 [2024-04-26 15:20:57.109426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.807 [2024-04-26 15:20:57.109455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.807 [2024-04-26 15:20:57.109489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.807 [2024-04-26 15:20:57.109518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.807 [2024-04-26 15:20:57.109547] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.807 [2024-04-26 15:20:57.109581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.807 [2024-04-26 15:20:57.109612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.807 [2024-04-26 15:20:57.109643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.807 [2024-04-26 15:20:57.109678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.807 [2024-04-26 15:20:57.109707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.807 [2024-04-26 15:20:57.109744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.807 [2024-04-26 15:20:57.109776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.807 [2024-04-26 15:20:57.109811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.807 [2024-04-26 15:20:57.109851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.807 [2024-04-26 15:20:57.109883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.807 [2024-04-26 15:20:57.109916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.807 [2024-04-26 15:20:57.109949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.807 [2024-04-26 15:20:57.109978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.807 [2024-04-26 15:20:57.110007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
00:10:39.807 [2024-04-26 15:20:57.110040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the above *ERROR* line repeats several hundred times, with timestamps advancing from 15:20:57.110040 to 15:20:57.121587 and the elapsed-time stamp from 00:10:39.807 to 00:10:39.810; repeats omitted ...]
block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.121620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.121650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.121680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.121713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.121742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.121773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.121803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.121830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.121864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.121889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.121920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.121953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.121984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.122015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 
15:20:57.122047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.122077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.122106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.122141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.122176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.122212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.122242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.122272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.122307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.122341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.122366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.122405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.122798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.122834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.122873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.122905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.122933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.122966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.122995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.123021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.123046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.123070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.123104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.123141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.123179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.810 [2024-04-26 15:20:57.123212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 
[2024-04-26 15:20:57.123328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123687] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.123977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.124006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.124038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.124076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.124105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.124140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.811 [2024-04-26 15:20:57.124179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.124212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.124237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.124261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.124287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.124313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.124339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.124364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.124390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.124414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.124440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.124464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.124490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.124514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.124541] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.124573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.124600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.124628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 
15:20:57.125885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.125979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.126014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.126045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.126074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.126103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.126133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.126166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.126196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.126224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.126254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.126289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.126328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.126365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.126394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.126425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.811 [2024-04-26 15:20:57.126455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.126492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.126527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.126555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.126586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.126614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.126648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.126681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.126712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.126738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.126772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 
[2024-04-26 15:20:57.126802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.126836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.126876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.126912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.126941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.126977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.127320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.127353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.127387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.127418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.127452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.127482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.127518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.127552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.127586] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.127621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.127653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.127687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.127718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.127750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.127779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.127812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.127847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.127882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.127914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.127943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.127972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.128007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.128041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.812 [2024-04-26 15:20:57.128075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.128108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.128138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.128169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.128199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.128229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.128260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.128294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.128325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.128357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.128386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.128419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.128450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.128480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.128507] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.812 [2024-04-26 15:20:57.128542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical error entries from 15:20:57.128573 through 15:20:57.139223 omitted ...]
00:10:39.815 [2024-04-26 15:20:57.139254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:10:39.815 [2024-04-26 15:20:57.139288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.139319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.139348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.139381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.139410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.139462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.139493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.139528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.139563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.139596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.139626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.139658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.139691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.139731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.139765] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.139798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.139827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.139868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.139899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.139928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.139960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.139991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.140016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.140046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.140076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.140110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.140141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.140177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.140207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.815 [2024-04-26 15:20:57.140242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.140270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.140310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.140343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.140376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.140401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.140434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.140466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.140503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.140532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.140566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.140596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.140631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.140662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 
15:20:57.140693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.140727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.140757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.140821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.140857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.140890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.140927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.140956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.141016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.141045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.141077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.141109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.141485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.141519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.141552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.141583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.141623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.141654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.141686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.141713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.141745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.141773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.141803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.141833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.141872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.141907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.141939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.141971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 
[2024-04-26 15:20:57.142036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142474] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.142983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.143018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.143052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.143084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.143117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.143147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.143177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.143210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.143245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.143272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.143311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.143342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.143375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.143411] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.143445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.143480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.143511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.143872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.143919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.143948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.143983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:39.816 [2024-04-26 15:20:57.144021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.816 [2024-04-26 15:20:57.144057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 
15:20:57.144226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.144978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 
[2024-04-26 15:20:57.145106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145457] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.145685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.146142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.146177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.146207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.146249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.146278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.817 [2024-04-26 15:20:57.146311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.146346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.146382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.146414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.146442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.146476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.146509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.146540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.146572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.146604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.146634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.146669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.146698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.146733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.146767] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.817 [2024-04-26 15:20:57.146805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* entries from 00:10:39.817 [2024-04-26 15:20:57.146836] through 00:10:39.820 [2024-04-26 15:20:57.157399] elided]
00:10:39.820 [2024-04-26 15:20:57.157430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:10:39.820 [2024-04-26 15:20:57.157464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.820 [2024-04-26 15:20:57.157499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.820 [2024-04-26 15:20:57.157533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.820 [2024-04-26 15:20:57.157561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.820 [2024-04-26 15:20:57.157592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.820 [2024-04-26 15:20:57.157624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.820 [2024-04-26 15:20:57.157657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.157686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.157716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.157748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158218] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.158991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 
15:20:57.159144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.159941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 
[2024-04-26 15:20:57.159971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.160320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.160353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.160388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.160430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.160462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.160509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.160541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.160601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.160633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.160667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.160696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.160727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.160779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.160812] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.160849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.160884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.160919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.160972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.161004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.161042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.161069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.161100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.161137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.161170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.161198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.161229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.161266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.161304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.821 [2024-04-26 15:20:57.161336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.821 [2024-04-26 15:20:57.161365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.161404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.161438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.161468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.161501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.161530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.161562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.161594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.161629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.161660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.161692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.161722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.161755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.161786] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.161815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.161851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.161887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.161913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.161942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.161971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.162005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.162037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.162068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.162097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.162133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.162165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.162203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.162233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.162267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.162300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.162330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.162359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.162389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.162444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.162478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.162828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.162865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.162899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.162929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.162962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.162992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 
15:20:57.163050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.163959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 
[2024-04-26 15:20:57.163990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164433] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.164819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.165169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:39.822 [2024-04-26 15:20:57.165208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:39.822 [2024-04-26 15:20:57.165241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:10:40.097 [2024-04-26 15:20:57.176341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.176373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.176416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.176449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.176483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.176513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.176546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.176580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.176613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.176643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.176677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.176710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.176750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.176785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.176817] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.176850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.176881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.176920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.176952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.176983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.177040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.177072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.177102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.177135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.177166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.177199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.177231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.177261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.177293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.177323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.177359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.177390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.177424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.177454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.177492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.177525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.177557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.177592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.177631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.177660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.177689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.177725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.097 [2024-04-26 15:20:57.177753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 
15:20:57.177785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.177824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.177859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.177896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 
[2024-04-26 15:20:57.178849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.178939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179463] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.179973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180623] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.180992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:40.098 [2024-04-26 15:20:57.181029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:40.099 [2024-04-26 15:20:57.181130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.181942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 
[2024-04-26 15:20:57.181973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.182005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.182035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.182280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.182314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.182349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.182382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.182411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.182441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.182470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.182504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.182534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.182567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.182605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.182640] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.182669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.182701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.182730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.182766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.182800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.182835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.182870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.182907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.182939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.182970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.183001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.183036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.183068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.099 [2024-04-26 15:20:57.183102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:40.099 [2024-04-26 15:20:57.183129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.194821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:10:40.102 [2024-04-26 15:20:57.194857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.194887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.194925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.194960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.194993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.195030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.195065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.195095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.195132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.195163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.195193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.195225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.195255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.195285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.195315] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.195345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.195377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.195413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.195442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.195473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.195507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.195539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.195575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.195608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.102 [2024-04-26 15:20:57.195637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.195675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.195707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.195742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.195772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.195802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.195842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.195876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.195909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.195950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.195985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 
15:20:57.196296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.196994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.197032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.197060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.197090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.197122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.197155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.197195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.197226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.197647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.197681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.197716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 
[2024-04-26 15:20:57.197749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.197788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.197817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.197854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.197884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.197914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.197945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.197978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198201] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.103 [2024-04-26 15:20:57.198970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.199005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.199038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.199068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.199108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.199137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.199169] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.199423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.199454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.199492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.199529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.199562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.199591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.199618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.199649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.199680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.199714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.199749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.199781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.199814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.199850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.199883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.199917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.199949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.199979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 
15:20:57.200354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.200990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.201019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.201048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.201079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.201110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.201145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.201176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.201204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.201238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.201267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 
[2024-04-26 15:20:57.201297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.201333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.201371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.201400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.201428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.201463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.201491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.201617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.201650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.201681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.201712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.201744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.201777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.201810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.201848] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.104 [2024-04-26 15:20:57.201882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.107 [2024-04-26 15:20:57.213587] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.107 [2024-04-26 15:20:57.213621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.107 [2024-04-26 15:20:57.213652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.107 [2024-04-26 15:20:57.213686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.107 [2024-04-26 15:20:57.213715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.107 [2024-04-26 15:20:57.213743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.107 [2024-04-26 15:20:57.213777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.107 [2024-04-26 15:20:57.213809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.107 [2024-04-26 15:20:57.213846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.107 [2024-04-26 15:20:57.213878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.107 [2024-04-26 15:20:57.213915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.107 [2024-04-26 15:20:57.213947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.213982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.214014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.214045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:40.108 [2024-04-26 15:20:57.214079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.214111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.214140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.214170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.214205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.214243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.214278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.214309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.214357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.214723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.214755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.214785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.214818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.214855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.214891] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.214922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.214954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.214986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 
15:20:57.215834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.215996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.216030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.216064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.216102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.216138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.216167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.216195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.216235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.216264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.216295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.216328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.216366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.216402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.216431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.216462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.216495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.216526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.216558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.216588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.216618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.216674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.216707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.216740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 [2024-04-26 15:20:57.216772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:40.108 
[2024-04-26 15:20:57.217126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:41.050 Initializing NVMe Controllers 00:10:41.050 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:41.050 Controller IO queue size 128, less than required. 00:10:41.050 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:41.050 Controller IO queue size 128, less than required. 00:10:41.050 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:41.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:41.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:41.050 Initialization complete. Launching workers. 00:10:41.050 ======================================================== 00:10:41.050 Latency(us) 00:10:41.050 Device Information : IOPS MiB/s Average min max 00:10:41.050 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1291.67 0.63 24985.02 1932.45 1090710.94 00:10:41.050 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6636.63 3.24 19286.96 1652.99 400271.20 00:10:41.050 ======================================================== 00:10:41.050 Total : 7928.30 3.87 20215.28 1652.99 1090710.94 00:10:41.050 00:10:41.050 15:20:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.050 15:20:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1053 00:10:41.050 15:20:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:10:41.312 true 00:10:41.312 15:20:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1506669 00:10:41.312 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (1506669) - No such process 00:10:41.312 15:20:58 -- target/ns_hotplug_stress.sh@44 -- # wait 1506669 00:10:41.312 15:20:58 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:41.312 15:20:58 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:10:41.312 15:20:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:41.312 15:20:58 -- nvmf/common.sh@117 -- # sync 00:10:41.312 15:20:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:41.312 15:20:58 -- nvmf/common.sh@120 -- # set +e 00:10:41.312 15:20:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:41.312 15:20:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:41.312 rmmod nvme_tcp 00:10:41.312 rmmod nvme_fabrics 00:10:41.312 rmmod nvme_keyring 00:10:41.312 15:20:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:41.312 15:20:58 -- nvmf/common.sh@124 -- # set -e 00:10:41.312 15:20:58 -- nvmf/common.sh@125 -- # return 0 00:10:41.312 15:20:58 -- nvmf/common.sh@478 -- # '[' -n 1506298 ']' 00:10:41.312 15:20:58 -- nvmf/common.sh@479 -- # killprocess 1506298 00:10:41.312 15:20:58 -- common/autotest_common.sh@936 -- # '[' -z 1506298 ']' 00:10:41.312 15:20:58 -- common/autotest_common.sh@940 -- # kill -0 1506298 00:10:41.312 15:20:58 -- common/autotest_common.sh@941 -- # uname 00:10:41.312 15:20:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:41.312 15:20:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1506298 00:10:41.312 15:20:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:41.312 15:20:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:41.312 15:20:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1506298' 00:10:41.312 killing process with pid 1506298 00:10:41.312 15:20:58 -- common/autotest_common.sh@955 -- # kill 1506298 00:10:41.312 15:20:58 -- 
common/autotest_common.sh@960 -- # wait 1506298 00:10:41.573 15:20:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:41.573 15:20:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:41.573 15:20:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:41.573 15:20:58 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:41.573 15:20:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:41.573 15:20:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.573 15:20:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:41.573 15:20:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.559 15:21:00 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:43.559 00:10:43.559 real 0m43.643s 00:10:43.559 user 2m36.506s 00:10:43.559 sys 0m11.485s 00:10:43.559 15:21:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:43.559 15:21:00 -- common/autotest_common.sh@10 -- # set +x 00:10:43.559 ************************************ 00:10:43.559 END TEST nvmf_ns_hotplug_stress 00:10:43.559 ************************************ 00:10:43.559 15:21:00 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:43.559 15:21:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:43.559 15:21:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:43.559 15:21:00 -- common/autotest_common.sh@10 -- # set +x 00:10:43.822 ************************************ 00:10:43.822 START TEST nvmf_connect_stress 00:10:43.822 ************************************ 00:10:43.822 15:21:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:43.822 * Looking for test storage... 
00:10:43.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.822 15:21:01 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.822 15:21:01 -- nvmf/common.sh@7 -- # uname -s 00:10:43.822 15:21:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.822 15:21:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.822 15:21:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.822 15:21:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.822 15:21:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.822 15:21:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.822 15:21:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.822 15:21:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.822 15:21:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.822 15:21:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.822 15:21:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:43.822 15:21:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:43.822 15:21:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.822 15:21:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.822 15:21:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.822 15:21:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.822 15:21:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.822 15:21:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.822 15:21:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.822 15:21:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.822 15:21:01 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.822 15:21:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.822 15:21:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.822 15:21:01 -- paths/export.sh@5 -- # export PATH 00:10:43.822 15:21:01 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.822 15:21:01 -- nvmf/common.sh@47 -- # : 0 00:10:43.822 15:21:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:43.822 15:21:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:43.822 15:21:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.822 15:21:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.822 15:21:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.822 15:21:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:43.822 15:21:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:43.822 15:21:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:43.822 15:21:01 -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:43.822 15:21:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:43.822 15:21:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.822 15:21:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:43.822 15:21:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:43.822 15:21:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:43.822 15:21:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.822 15:21:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:43.822 15:21:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.822 15:21:01 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:43.822 15:21:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:43.822 15:21:01 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:10:43.822 15:21:01 -- common/autotest_common.sh@10 -- # set +x 00:10:51.965 15:21:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:51.965 15:21:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:51.965 15:21:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:51.965 15:21:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:51.965 15:21:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:51.965 15:21:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:51.965 15:21:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:51.965 15:21:08 -- nvmf/common.sh@295 -- # net_devs=() 00:10:51.965 15:21:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:51.965 15:21:08 -- nvmf/common.sh@296 -- # e810=() 00:10:51.965 15:21:08 -- nvmf/common.sh@296 -- # local -ga e810 00:10:51.965 15:21:08 -- nvmf/common.sh@297 -- # x722=() 00:10:51.965 15:21:08 -- nvmf/common.sh@297 -- # local -ga x722 00:10:51.965 15:21:08 -- nvmf/common.sh@298 -- # mlx=() 00:10:51.965 15:21:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:51.965 15:21:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.965 15:21:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.965 15:21:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.965 15:21:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.965 15:21:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.965 15:21:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.965 15:21:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.965 15:21:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.965 15:21:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.965 15:21:08 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.965 15:21:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.965 15:21:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:51.965 15:21:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:51.965 15:21:08 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:51.965 15:21:08 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:51.965 15:21:08 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:51.965 15:21:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:51.965 15:21:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.965 15:21:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:51.965 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:51.965 15:21:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.965 15:21:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.965 15:21:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.965 15:21:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.965 15:21:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.965 15:21:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.965 15:21:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:51.965 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:51.965 15:21:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.965 15:21:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.965 15:21:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.965 15:21:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.965 15:21:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.965 15:21:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:51.965 15:21:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:51.965 15:21:08 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:51.965 15:21:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:10:51.965 15:21:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.965 15:21:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:51.965 15:21:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.965 15:21:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:51.965 Found net devices under 0000:31:00.0: cvl_0_0 00:10:51.965 15:21:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.965 15:21:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.965 15:21:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.965 15:21:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:51.965 15:21:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.965 15:21:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:51.965 Found net devices under 0000:31:00.1: cvl_0_1 00:10:51.965 15:21:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.965 15:21:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:51.965 15:21:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:51.965 15:21:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:51.965 15:21:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:51.965 15:21:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:51.965 15:21:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.965 15:21:08 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.965 15:21:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.965 15:21:08 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:51.965 15:21:08 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.965 15:21:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.965 15:21:08 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:51.965 15:21:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:10:51.965 15:21:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.965 15:21:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:51.965 15:21:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:51.965 15:21:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.965 15:21:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.965 15:21:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.965 15:21:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.965 15:21:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:51.965 15:21:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.965 15:21:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.965 15:21:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.965 15:21:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:51.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:51.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.497 ms 00:10:51.965 00:10:51.965 --- 10.0.0.2 ping statistics --- 00:10:51.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.966 rtt min/avg/max/mdev = 0.497/0.497/0.497/0.000 ms 00:10:51.966 15:21:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:51.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:10:51.966 00:10:51.966 --- 10.0.0.1 ping statistics --- 00:10:51.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.966 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:10:51.966 15:21:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.966 15:21:08 -- nvmf/common.sh@411 -- # return 0 00:10:51.966 15:21:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:51.966 15:21:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.966 15:21:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:51.966 15:21:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:51.966 15:21:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.966 15:21:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:51.966 15:21:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:51.966 15:21:08 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:51.966 15:21:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:51.966 15:21:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:51.966 15:21:08 -- common/autotest_common.sh@10 -- # set +x 00:10:51.966 15:21:08 -- nvmf/common.sh@470 -- # nvmfpid=1517797 00:10:51.966 15:21:08 -- nvmf/common.sh@471 -- # waitforlisten 1517797 00:10:51.966 15:21:08 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:51.966 15:21:08 -- common/autotest_common.sh@817 -- # '[' -z 1517797 ']' 00:10:51.966 15:21:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.966 15:21:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:51.966 15:21:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:51.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.966 15:21:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:51.966 15:21:08 -- common/autotest_common.sh@10 -- # set +x 00:10:51.966 [2024-04-26 15:21:08.759565] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:10:51.966 [2024-04-26 15:21:08.759623] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.966 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.966 [2024-04-26 15:21:08.849336] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:51.966 [2024-04-26 15:21:08.942213] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.966 [2024-04-26 15:21:08.942277] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.966 [2024-04-26 15:21:08.942286] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.966 [2024-04-26 15:21:08.942293] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.966 [2024-04-26 15:21:08.942300] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:51.966 [2024-04-26 15:21:08.942443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.966 [2024-04-26 15:21:08.942928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.966 [2024-04-26 15:21:08.943111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.227 15:21:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:52.227 15:21:09 -- common/autotest_common.sh@850 -- # return 0 00:10:52.227 15:21:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:52.227 15:21:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:52.227 15:21:09 -- common/autotest_common.sh@10 -- # set +x 00:10:52.227 15:21:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.227 15:21:09 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:52.227 15:21:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:52.227 15:21:09 -- common/autotest_common.sh@10 -- # set +x 00:10:52.227 [2024-04-26 15:21:09.589016] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.227 15:21:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:52.227 15:21:09 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:52.227 15:21:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:52.227 15:21:09 -- common/autotest_common.sh@10 -- # set +x 00:10:52.227 15:21:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:52.227 15:21:09 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.227 15:21:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:52.227 15:21:09 -- common/autotest_common.sh@10 -- # set +x 00:10:52.227 [2024-04-26 15:21:09.613457] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:52.227 15:21:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:52.227 15:21:09 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:52.227 15:21:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:52.227 15:21:09 -- common/autotest_common.sh@10 -- # set +x 00:10:52.227 NULL1 00:10:52.227 15:21:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:52.227 15:21:09 -- target/connect_stress.sh@21 -- # PERF_PID=1518144 00:10:52.227 15:21:09 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:52.227 15:21:09 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:52.227 15:21:09 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:52.227 15:21:09 -- target/connect_stress.sh@27 -- # seq 1 20 00:10:52.227 15:21:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.227 15:21:09 -- target/connect_stress.sh@28 -- # cat 00:10:52.227 15:21:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.227 15:21:09 -- target/connect_stress.sh@28 -- # cat 00:10:52.227 15:21:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.227 15:21:09 -- target/connect_stress.sh@28 -- # cat 00:10:52.227 15:21:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.227 15:21:09 -- target/connect_stress.sh@28 -- # cat 00:10:52.487 15:21:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.487 15:21:09 -- target/connect_stress.sh@28 -- # cat 00:10:52.487 EAL: No free 2048 kB hugepages reported on node 1 00:10:52.487 15:21:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.487 15:21:09 -- target/connect_stress.sh@28 
-- # cat 00:10:52.487 15:21:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.487 15:21:09 -- target/connect_stress.sh@28 -- # cat 00:10:52.487 15:21:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.487 15:21:09 -- target/connect_stress.sh@28 -- # cat 00:10:52.487 15:21:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.487 15:21:09 -- target/connect_stress.sh@28 -- # cat 00:10:52.488 15:21:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.488 15:21:09 -- target/connect_stress.sh@28 -- # cat 00:10:52.488 15:21:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.488 15:21:09 -- target/connect_stress.sh@28 -- # cat 00:10:52.488 15:21:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.488 15:21:09 -- target/connect_stress.sh@28 -- # cat 00:10:52.488 15:21:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.488 15:21:09 -- target/connect_stress.sh@28 -- # cat 00:10:52.488 15:21:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.488 15:21:09 -- target/connect_stress.sh@28 -- # cat 00:10:52.488 15:21:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.488 15:21:09 -- target/connect_stress.sh@28 -- # cat 00:10:52.488 15:21:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.488 15:21:09 -- target/connect_stress.sh@28 -- # cat 00:10:52.488 15:21:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.488 15:21:09 -- target/connect_stress.sh@28 -- # cat 00:10:52.488 15:21:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.488 15:21:09 -- target/connect_stress.sh@28 -- # cat 00:10:52.488 15:21:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.488 15:21:09 -- target/connect_stress.sh@28 -- # cat 00:10:52.488 15:21:09 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.488 15:21:09 -- target/connect_stress.sh@28 -- # cat 00:10:52.488 
15:21:09 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:10:52.488 15:21:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.488 15:21:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:52.488 15:21:09 -- common/autotest_common.sh@10 -- # set +x 00:10:52.748 15:21:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:52.748 15:21:10 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:10:52.748 15:21:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.748 15:21:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:52.748 15:21:10 -- common/autotest_common.sh@10 -- # set +x 00:10:53.008 15:21:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.008 15:21:10 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:10:53.008 15:21:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.008 15:21:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.008 15:21:10 -- common/autotest_common.sh@10 -- # set +x 00:10:53.580 15:21:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.580 15:21:10 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:10:53.580 15:21:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.580 15:21:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.580 15:21:10 -- common/autotest_common.sh@10 -- # set +x 00:10:53.840 15:21:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.840 15:21:11 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:10:53.840 15:21:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.840 15:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.840 15:21:11 -- common/autotest_common.sh@10 -- # set +x 00:10:54.100 15:21:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:54.100 15:21:11 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:10:54.100 15:21:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.100 15:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:54.100 15:21:11 -- 
common/autotest_common.sh@10 -- # set +x 00:10:54.361 15:21:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:54.361 15:21:11 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:10:54.361 15:21:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.361 15:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:54.361 15:21:11 -- common/autotest_common.sh@10 -- # set +x 00:10:54.622 15:21:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:54.622 15:21:12 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:10:54.622 15:21:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.622 15:21:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:54.622 15:21:12 -- common/autotest_common.sh@10 -- # set +x 00:10:55.196 15:21:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.196 15:21:12 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:10:55.196 15:21:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.196 15:21:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.196 15:21:12 -- common/autotest_common.sh@10 -- # set +x 00:10:55.457 15:21:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.457 15:21:12 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:10:55.457 15:21:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.457 15:21:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.457 15:21:12 -- common/autotest_common.sh@10 -- # set +x 00:10:55.718 15:21:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.718 15:21:12 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:10:55.718 15:21:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.718 15:21:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.718 15:21:12 -- common/autotest_common.sh@10 -- # set +x 00:10:55.995 15:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.995 15:21:13 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:10:55.995 15:21:13 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.995 15:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.995 15:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:56.259 15:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:56.259 15:21:13 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:10:56.259 15:21:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.259 15:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:56.259 15:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:56.831 15:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:56.831 15:21:13 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:10:56.831 15:21:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.831 15:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:56.831 15:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:57.092 15:21:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:57.092 15:21:14 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:10:57.092 15:21:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.092 15:21:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:57.092 15:21:14 -- common/autotest_common.sh@10 -- # set +x 00:10:57.353 15:21:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:57.353 15:21:14 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:10:57.353 15:21:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.353 15:21:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:57.353 15:21:14 -- common/autotest_common.sh@10 -- # set +x 00:10:57.615 15:21:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:57.615 15:21:14 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:10:57.615 15:21:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.615 15:21:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:57.615 15:21:14 -- common/autotest_common.sh@10 -- # set +x 00:10:57.876 15:21:15 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:57.876 15:21:15 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:10:57.876 15:21:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.876 15:21:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:57.876 15:21:15 -- common/autotest_common.sh@10 -- # set +x 00:10:58.446 15:21:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:58.446 15:21:15 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:10:58.446 15:21:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.446 15:21:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:58.446 15:21:15 -- common/autotest_common.sh@10 -- # set +x 00:10:58.707 15:21:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:58.707 15:21:15 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:10:58.707 15:21:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.707 15:21:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:58.707 15:21:15 -- common/autotest_common.sh@10 -- # set +x 00:10:58.967 15:21:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:58.967 15:21:16 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:10:58.967 15:21:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.967 15:21:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:58.967 15:21:16 -- common/autotest_common.sh@10 -- # set +x 00:10:59.227 15:21:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:59.227 15:21:16 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:10:59.227 15:21:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.227 15:21:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:59.227 15:21:16 -- common/autotest_common.sh@10 -- # set +x 00:10:59.487 15:21:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:59.487 15:21:16 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:10:59.487 15:21:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.487 15:21:16 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:10:59.487 15:21:16 -- common/autotest_common.sh@10 -- # set +x 00:11:00.059 15:21:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.059 15:21:17 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:11:00.059 15:21:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.059 15:21:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.059 15:21:17 -- common/autotest_common.sh@10 -- # set +x 00:11:00.321 15:21:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.321 15:21:17 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:11:00.321 15:21:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.321 15:21:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.321 15:21:17 -- common/autotest_common.sh@10 -- # set +x 00:11:00.581 15:21:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.581 15:21:17 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:11:00.581 15:21:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.581 15:21:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.581 15:21:17 -- common/autotest_common.sh@10 -- # set +x 00:11:00.842 15:21:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.843 15:21:18 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:11:00.843 15:21:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.843 15:21:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.843 15:21:18 -- common/autotest_common.sh@10 -- # set +x 00:11:01.104 15:21:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:01.104 15:21:18 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:11:01.104 15:21:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.104 15:21:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:01.104 15:21:18 -- common/autotest_common.sh@10 -- # set +x 00:11:01.676 15:21:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:01.676 15:21:18 -- 
target/connect_stress.sh@34 -- # kill -0 1518144 00:11:01.677 15:21:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.677 15:21:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:01.677 15:21:18 -- common/autotest_common.sh@10 -- # set +x 00:11:01.938 15:21:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:01.938 15:21:19 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:11:01.938 15:21:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.938 15:21:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:01.938 15:21:19 -- common/autotest_common.sh@10 -- # set +x 00:11:02.200 15:21:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:02.200 15:21:19 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:11:02.200 15:21:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.200 15:21:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:02.200 15:21:19 -- common/autotest_common.sh@10 -- # set +x 00:11:02.461 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:02.461 15:21:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:02.461 15:21:19 -- target/connect_stress.sh@34 -- # kill -0 1518144 00:11:02.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1518144) - No such process 00:11:02.461 15:21:19 -- target/connect_stress.sh@38 -- # wait 1518144 00:11:02.461 15:21:19 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:02.461 15:21:19 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:02.461 15:21:19 -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:02.461 15:21:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:02.461 15:21:19 -- nvmf/common.sh@117 -- # sync 00:11:02.461 15:21:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:02.461 15:21:19 -- nvmf/common.sh@120 -- # set +e 00:11:02.461 15:21:19 -- nvmf/common.sh@121 -- 
# for i in {1..20} 00:11:02.461 15:21:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:02.461 rmmod nvme_tcp 00:11:02.461 rmmod nvme_fabrics 00:11:02.461 rmmod nvme_keyring 00:11:02.461 15:21:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:02.461 15:21:19 -- nvmf/common.sh@124 -- # set -e 00:11:02.461 15:21:19 -- nvmf/common.sh@125 -- # return 0 00:11:02.461 15:21:19 -- nvmf/common.sh@478 -- # '[' -n 1517797 ']' 00:11:02.461 15:21:19 -- nvmf/common.sh@479 -- # killprocess 1517797 00:11:02.461 15:21:19 -- common/autotest_common.sh@936 -- # '[' -z 1517797 ']' 00:11:02.461 15:21:19 -- common/autotest_common.sh@940 -- # kill -0 1517797 00:11:02.461 15:21:19 -- common/autotest_common.sh@941 -- # uname 00:11:02.461 15:21:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:02.461 15:21:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1517797 00:11:02.722 15:21:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:02.722 15:21:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:02.722 15:21:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1517797' 00:11:02.722 killing process with pid 1517797 00:11:02.722 15:21:19 -- common/autotest_common.sh@955 -- # kill 1517797 00:11:02.722 15:21:19 -- common/autotest_common.sh@960 -- # wait 1517797 00:11:02.722 15:21:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:02.722 15:21:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:02.722 15:21:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:02.722 15:21:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:02.722 15:21:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:02.722 15:21:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.722 15:21:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:02.722 15:21:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.268 
15:21:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:05.268 00:11:05.268 real 0m21.082s 00:11:05.268 user 0m42.251s 00:11:05.268 sys 0m8.766s 00:11:05.268 15:21:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:05.268 15:21:22 -- common/autotest_common.sh@10 -- # set +x 00:11:05.268 ************************************ 00:11:05.268 END TEST nvmf_connect_stress 00:11:05.268 ************************************ 00:11:05.268 15:21:22 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:05.268 15:21:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:05.268 15:21:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:05.268 15:21:22 -- common/autotest_common.sh@10 -- # set +x 00:11:05.268 ************************************ 00:11:05.268 START TEST nvmf_fused_ordering 00:11:05.268 ************************************ 00:11:05.268 15:21:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:05.268 * Looking for test storage... 
00:11:05.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:05.268 15:21:22 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:05.268 15:21:22 -- nvmf/common.sh@7 -- # uname -s 00:11:05.268 15:21:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.268 15:21:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.268 15:21:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.268 15:21:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.268 15:21:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.268 15:21:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.268 15:21:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.268 15:21:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.268 15:21:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.268 15:21:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.268 15:21:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:05.268 15:21:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:05.268 15:21:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.268 15:21:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.268 15:21:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:05.268 15:21:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:05.268 15:21:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:05.268 15:21:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.268 15:21:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.268 15:21:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.268 15:21:22 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.268 15:21:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.268 15:21:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.268 15:21:22 -- paths/export.sh@5 -- # export PATH 00:11:05.268 15:21:22 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.268 15:21:22 -- nvmf/common.sh@47 -- # : 0 00:11:05.268 15:21:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:05.268 15:21:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:05.268 15:21:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:05.268 15:21:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.268 15:21:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.268 15:21:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:05.268 15:21:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:05.268 15:21:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:05.268 15:21:22 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:05.268 15:21:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:05.268 15:21:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:05.268 15:21:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:05.268 15:21:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:05.268 15:21:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:05.268 15:21:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.268 15:21:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:05.268 15:21:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.268 15:21:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:05.268 15:21:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:05.268 15:21:22 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:11:05.268 15:21:22 -- common/autotest_common.sh@10 -- # set +x 00:11:11.865 15:21:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:11.865 15:21:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:11.865 15:21:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:11.865 15:21:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:11.865 15:21:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:11.865 15:21:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:11.865 15:21:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:11.865 15:21:29 -- nvmf/common.sh@295 -- # net_devs=() 00:11:11.865 15:21:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:11.865 15:21:29 -- nvmf/common.sh@296 -- # e810=() 00:11:11.865 15:21:29 -- nvmf/common.sh@296 -- # local -ga e810 00:11:11.865 15:21:29 -- nvmf/common.sh@297 -- # x722=() 00:11:11.865 15:21:29 -- nvmf/common.sh@297 -- # local -ga x722 00:11:11.865 15:21:29 -- nvmf/common.sh@298 -- # mlx=() 00:11:11.865 15:21:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:11.865 15:21:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:11.865 15:21:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:11.865 15:21:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:11.865 15:21:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:11.865 15:21:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:11.865 15:21:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:11.865 15:21:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:11.865 15:21:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:11.865 15:21:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:11.865 15:21:29 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:11.865 15:21:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:11.865 15:21:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:11.865 15:21:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:11.865 15:21:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:11.865 15:21:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:11.865 15:21:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:11.865 15:21:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:11.865 15:21:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:11.865 15:21:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:11.865 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:11.865 15:21:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:11.865 15:21:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:11.865 15:21:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.865 15:21:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.865 15:21:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:11.865 15:21:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:11.865 15:21:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:11.865 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:11.865 15:21:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:11.865 15:21:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:11.865 15:21:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.865 15:21:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.865 15:21:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:11.865 15:21:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:11.865 15:21:29 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:11.865 15:21:29 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:11.865 15:21:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:11:11.865 15:21:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.865 15:21:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:11.865 15:21:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.865 15:21:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:11.865 Found net devices under 0000:31:00.0: cvl_0_0 00:11:11.865 15:21:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.865 15:21:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:11.865 15:21:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.865 15:21:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:11.865 15:21:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.865 15:21:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:11.865 Found net devices under 0000:31:00.1: cvl_0_1 00:11:11.865 15:21:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.865 15:21:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:11.865 15:21:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:11.865 15:21:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:11.865 15:21:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:11.865 15:21:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:11.866 15:21:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:11.866 15:21:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:11.866 15:21:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:11.866 15:21:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:11.866 15:21:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:11.866 15:21:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:11.866 15:21:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:11.866 15:21:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:11:11.866 15:21:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:11.866 15:21:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:11.866 15:21:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:11.866 15:21:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:12.127 15:21:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:12.127 15:21:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:12.127 15:21:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:12.127 15:21:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:12.127 15:21:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:12.127 15:21:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:12.390 15:21:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:12.390 15:21:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:12.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:12.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:11:12.390 00:11:12.390 --- 10.0.0.2 ping statistics --- 00:11:12.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.390 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:11:12.390 15:21:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:12.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:12.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:11:12.390 00:11:12.390 --- 10.0.0.1 ping statistics --- 00:11:12.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.390 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:11:12.390 15:21:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:12.390 15:21:29 -- nvmf/common.sh@411 -- # return 0 00:11:12.390 15:21:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:12.390 15:21:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:12.390 15:21:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:12.390 15:21:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:12.390 15:21:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:12.390 15:21:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:12.390 15:21:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:12.390 15:21:29 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:12.390 15:21:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:12.390 15:21:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:12.390 15:21:29 -- common/autotest_common.sh@10 -- # set +x 00:11:12.390 15:21:29 -- nvmf/common.sh@470 -- # nvmfpid=1524588 00:11:12.390 15:21:29 -- nvmf/common.sh@471 -- # waitforlisten 1524588 00:11:12.390 15:21:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:12.390 15:21:29 -- common/autotest_common.sh@817 -- # '[' -z 1524588 ']' 00:11:12.390 15:21:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.390 15:21:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:12.390 15:21:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:12.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.390 15:21:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:12.390 15:21:29 -- common/autotest_common.sh@10 -- # set +x 00:11:12.390 [2024-04-26 15:21:29.703628] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:11:12.390 [2024-04-26 15:21:29.703688] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.390 EAL: No free 2048 kB hugepages reported on node 1 00:11:12.390 [2024-04-26 15:21:29.794301] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.652 [2024-04-26 15:21:29.884584] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.652 [2024-04-26 15:21:29.884642] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:12.652 [2024-04-26 15:21:29.884650] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.652 [2024-04-26 15:21:29.884657] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:12.652 [2024-04-26 15:21:29.884669] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:12.652 [2024-04-26 15:21:29.884693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.225 15:21:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:13.225 15:21:30 -- common/autotest_common.sh@850 -- # return 0 00:11:13.225 15:21:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:13.225 15:21:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:13.225 15:21:30 -- common/autotest_common.sh@10 -- # set +x 00:11:13.225 15:21:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.225 15:21:30 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:13.225 15:21:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:13.225 15:21:30 -- common/autotest_common.sh@10 -- # set +x 00:11:13.225 [2024-04-26 15:21:30.535324] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.225 15:21:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:13.225 15:21:30 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:13.225 15:21:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:13.225 15:21:30 -- common/autotest_common.sh@10 -- # set +x 00:11:13.225 15:21:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:13.225 15:21:30 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.225 15:21:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:13.225 15:21:30 -- common/autotest_common.sh@10 -- # set +x 00:11:13.225 [2024-04-26 15:21:30.559544] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.225 15:21:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:13.225 15:21:30 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:13.225 15:21:30 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:11:13.225 15:21:30 -- common/autotest_common.sh@10 -- # set +x 00:11:13.225 NULL1 00:11:13.225 15:21:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:13.225 15:21:30 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:13.225 15:21:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:13.225 15:21:30 -- common/autotest_common.sh@10 -- # set +x 00:11:13.225 15:21:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:13.225 15:21:30 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:13.225 15:21:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:13.225 15:21:30 -- common/autotest_common.sh@10 -- # set +x 00:11:13.225 15:21:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:13.225 15:21:30 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:13.225 [2024-04-26 15:21:30.629125] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:11:13.225 [2024-04-26 15:21:30.629192] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524633 ] 00:11:13.225 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.798 Attached to nqn.2016-06.io.spdk:cnode1 00:11:13.798 Namespace ID: 1 size: 1GB 00:11:13.798 fused_ordering(0) ... 00:11:14.896 fused_ordering(715)
00:11:14.896 fused_ordering(716) 00:11:14.896 fused_ordering(717) 00:11:14.896 fused_ordering(718) 00:11:14.896 fused_ordering(719) 00:11:14.896 fused_ordering(720) 00:11:14.896 fused_ordering(721) 00:11:14.896 fused_ordering(722) 00:11:14.896 fused_ordering(723) 00:11:14.896 fused_ordering(724) 00:11:14.896 fused_ordering(725) 00:11:14.896 fused_ordering(726) 00:11:14.896 fused_ordering(727) 00:11:14.896 fused_ordering(728) 00:11:14.896 fused_ordering(729) 00:11:14.896 fused_ordering(730) 00:11:14.896 fused_ordering(731) 00:11:14.896 fused_ordering(732) 00:11:14.896 fused_ordering(733) 00:11:14.896 fused_ordering(734) 00:11:14.896 fused_ordering(735) 00:11:14.896 fused_ordering(736) 00:11:14.896 fused_ordering(737) 00:11:14.896 fused_ordering(738) 00:11:14.896 fused_ordering(739) 00:11:14.896 fused_ordering(740) 00:11:14.896 fused_ordering(741) 00:11:14.896 fused_ordering(742) 00:11:14.896 fused_ordering(743) 00:11:14.896 fused_ordering(744) 00:11:14.896 fused_ordering(745) 00:11:14.896 fused_ordering(746) 00:11:14.896 fused_ordering(747) 00:11:14.896 fused_ordering(748) 00:11:14.896 fused_ordering(749) 00:11:14.896 fused_ordering(750) 00:11:14.896 fused_ordering(751) 00:11:14.896 fused_ordering(752) 00:11:14.896 fused_ordering(753) 00:11:14.896 fused_ordering(754) 00:11:14.896 fused_ordering(755) 00:11:14.896 fused_ordering(756) 00:11:14.896 fused_ordering(757) 00:11:14.896 fused_ordering(758) 00:11:14.896 fused_ordering(759) 00:11:14.896 fused_ordering(760) 00:11:14.896 fused_ordering(761) 00:11:14.896 fused_ordering(762) 00:11:14.896 fused_ordering(763) 00:11:14.896 fused_ordering(764) 00:11:14.896 fused_ordering(765) 00:11:14.896 fused_ordering(766) 00:11:14.896 fused_ordering(767) 00:11:14.896 fused_ordering(768) 00:11:14.896 fused_ordering(769) 00:11:14.896 fused_ordering(770) 00:11:14.896 fused_ordering(771) 00:11:14.896 fused_ordering(772) 00:11:14.896 fused_ordering(773) 00:11:14.896 fused_ordering(774) 00:11:14.896 fused_ordering(775) 00:11:14.896 
fused_ordering(776) 00:11:14.896 fused_ordering(777) 00:11:14.896 fused_ordering(778) 00:11:14.896 fused_ordering(779) 00:11:14.896 fused_ordering(780) 00:11:14.896 fused_ordering(781) 00:11:14.896 fused_ordering(782) 00:11:14.896 fused_ordering(783) 00:11:14.896 fused_ordering(784) 00:11:14.896 fused_ordering(785) 00:11:14.896 fused_ordering(786) 00:11:14.896 fused_ordering(787) 00:11:14.896 fused_ordering(788) 00:11:14.896 fused_ordering(789) 00:11:14.896 fused_ordering(790) 00:11:14.896 fused_ordering(791) 00:11:14.896 fused_ordering(792) 00:11:14.896 fused_ordering(793) 00:11:14.896 fused_ordering(794) 00:11:14.896 fused_ordering(795) 00:11:14.896 fused_ordering(796) 00:11:14.896 fused_ordering(797) 00:11:14.896 fused_ordering(798) 00:11:14.896 fused_ordering(799) 00:11:14.896 fused_ordering(800) 00:11:14.896 fused_ordering(801) 00:11:14.896 fused_ordering(802) 00:11:14.896 fused_ordering(803) 00:11:14.896 fused_ordering(804) 00:11:14.896 fused_ordering(805) 00:11:14.896 fused_ordering(806) 00:11:14.896 fused_ordering(807) 00:11:14.896 fused_ordering(808) 00:11:14.896 fused_ordering(809) 00:11:14.896 fused_ordering(810) 00:11:14.896 fused_ordering(811) 00:11:14.896 fused_ordering(812) 00:11:14.896 fused_ordering(813) 00:11:14.896 fused_ordering(814) 00:11:14.896 fused_ordering(815) 00:11:14.896 fused_ordering(816) 00:11:14.896 fused_ordering(817) 00:11:14.896 fused_ordering(818) 00:11:14.896 fused_ordering(819) 00:11:14.896 fused_ordering(820) 00:11:15.469 fused_ordering(821) 00:11:15.469 fused_ordering(822) 00:11:15.469 fused_ordering(823) 00:11:15.469 fused_ordering(824) 00:11:15.469 fused_ordering(825) 00:11:15.469 fused_ordering(826) 00:11:15.469 fused_ordering(827) 00:11:15.469 fused_ordering(828) 00:11:15.469 fused_ordering(829) 00:11:15.469 fused_ordering(830) 00:11:15.469 fused_ordering(831) 00:11:15.469 fused_ordering(832) 00:11:15.469 fused_ordering(833) 00:11:15.469 fused_ordering(834) 00:11:15.469 fused_ordering(835) 00:11:15.469 fused_ordering(836) 
00:11:15.469 fused_ordering(837) 00:11:15.469 fused_ordering(838) 00:11:15.469 fused_ordering(839) 00:11:15.469 fused_ordering(840) 00:11:15.469 fused_ordering(841) 00:11:15.469 fused_ordering(842) 00:11:15.469 fused_ordering(843) 00:11:15.469 fused_ordering(844) 00:11:15.469 fused_ordering(845) 00:11:15.469 fused_ordering(846) 00:11:15.469 fused_ordering(847) 00:11:15.469 fused_ordering(848) 00:11:15.469 fused_ordering(849) 00:11:15.469 fused_ordering(850) 00:11:15.469 fused_ordering(851) 00:11:15.469 fused_ordering(852) 00:11:15.469 fused_ordering(853) 00:11:15.469 fused_ordering(854) 00:11:15.469 fused_ordering(855) 00:11:15.469 fused_ordering(856) 00:11:15.469 fused_ordering(857) 00:11:15.469 fused_ordering(858) 00:11:15.469 fused_ordering(859) 00:11:15.469 fused_ordering(860) 00:11:15.469 fused_ordering(861) 00:11:15.469 fused_ordering(862) 00:11:15.469 fused_ordering(863) 00:11:15.469 fused_ordering(864) 00:11:15.469 fused_ordering(865) 00:11:15.469 fused_ordering(866) 00:11:15.469 fused_ordering(867) 00:11:15.469 fused_ordering(868) 00:11:15.469 fused_ordering(869) 00:11:15.469 fused_ordering(870) 00:11:15.469 fused_ordering(871) 00:11:15.469 fused_ordering(872) 00:11:15.469 fused_ordering(873) 00:11:15.469 fused_ordering(874) 00:11:15.469 fused_ordering(875) 00:11:15.469 fused_ordering(876) 00:11:15.469 fused_ordering(877) 00:11:15.469 fused_ordering(878) 00:11:15.469 fused_ordering(879) 00:11:15.469 fused_ordering(880) 00:11:15.469 fused_ordering(881) 00:11:15.469 fused_ordering(882) 00:11:15.469 fused_ordering(883) 00:11:15.469 fused_ordering(884) 00:11:15.469 fused_ordering(885) 00:11:15.469 fused_ordering(886) 00:11:15.469 fused_ordering(887) 00:11:15.469 fused_ordering(888) 00:11:15.469 fused_ordering(889) 00:11:15.469 fused_ordering(890) 00:11:15.469 fused_ordering(891) 00:11:15.469 fused_ordering(892) 00:11:15.469 fused_ordering(893) 00:11:15.469 fused_ordering(894) 00:11:15.469 fused_ordering(895) 00:11:15.469 fused_ordering(896) 00:11:15.469 
fused_ordering(897) 00:11:15.469 fused_ordering(898) 00:11:15.469 fused_ordering(899) 00:11:15.469 fused_ordering(900) 00:11:15.469 fused_ordering(901) 00:11:15.469 fused_ordering(902) 00:11:15.469 fused_ordering(903) 00:11:15.469 fused_ordering(904) 00:11:15.469 fused_ordering(905) 00:11:15.469 fused_ordering(906) 00:11:15.469 fused_ordering(907) 00:11:15.469 fused_ordering(908) 00:11:15.469 fused_ordering(909) 00:11:15.469 fused_ordering(910) 00:11:15.469 fused_ordering(911) 00:11:15.469 fused_ordering(912) 00:11:15.469 fused_ordering(913) 00:11:15.469 fused_ordering(914) 00:11:15.469 fused_ordering(915) 00:11:15.469 fused_ordering(916) 00:11:15.469 fused_ordering(917) 00:11:15.469 fused_ordering(918) 00:11:15.469 fused_ordering(919) 00:11:15.469 fused_ordering(920) 00:11:15.469 fused_ordering(921) 00:11:15.469 fused_ordering(922) 00:11:15.469 fused_ordering(923) 00:11:15.469 fused_ordering(924) 00:11:15.469 fused_ordering(925) 00:11:15.469 fused_ordering(926) 00:11:15.469 fused_ordering(927) 00:11:15.469 fused_ordering(928) 00:11:15.469 fused_ordering(929) 00:11:15.469 fused_ordering(930) 00:11:15.469 fused_ordering(931) 00:11:15.469 fused_ordering(932) 00:11:15.469 fused_ordering(933) 00:11:15.469 fused_ordering(934) 00:11:15.469 fused_ordering(935) 00:11:15.469 fused_ordering(936) 00:11:15.469 fused_ordering(937) 00:11:15.469 fused_ordering(938) 00:11:15.469 fused_ordering(939) 00:11:15.469 fused_ordering(940) 00:11:15.469 fused_ordering(941) 00:11:15.469 fused_ordering(942) 00:11:15.469 fused_ordering(943) 00:11:15.469 fused_ordering(944) 00:11:15.469 fused_ordering(945) 00:11:15.469 fused_ordering(946) 00:11:15.469 fused_ordering(947) 00:11:15.469 fused_ordering(948) 00:11:15.469 fused_ordering(949) 00:11:15.469 fused_ordering(950) 00:11:15.469 fused_ordering(951) 00:11:15.469 fused_ordering(952) 00:11:15.469 fused_ordering(953) 00:11:15.469 fused_ordering(954) 00:11:15.469 fused_ordering(955) 00:11:15.469 fused_ordering(956) 00:11:15.469 fused_ordering(957) 
00:11:15.469 fused_ordering(958) 00:11:15.469 fused_ordering(959) 00:11:15.469 fused_ordering(960) 00:11:15.469 fused_ordering(961) 00:11:15.469 fused_ordering(962) 00:11:15.469 fused_ordering(963) 00:11:15.469 fused_ordering(964) 00:11:15.469 fused_ordering(965) 00:11:15.469 fused_ordering(966) 00:11:15.469 fused_ordering(967) 00:11:15.469 fused_ordering(968) 00:11:15.469 fused_ordering(969) 00:11:15.469 fused_ordering(970) 00:11:15.469 fused_ordering(971) 00:11:15.469 fused_ordering(972) 00:11:15.469 fused_ordering(973) 00:11:15.469 fused_ordering(974) 00:11:15.469 fused_ordering(975) 00:11:15.469 fused_ordering(976) 00:11:15.469 fused_ordering(977) 00:11:15.469 fused_ordering(978) 00:11:15.469 fused_ordering(979) 00:11:15.469 fused_ordering(980) 00:11:15.469 fused_ordering(981) 00:11:15.469 fused_ordering(982) 00:11:15.469 fused_ordering(983) 00:11:15.469 fused_ordering(984) 00:11:15.469 fused_ordering(985) 00:11:15.469 fused_ordering(986) 00:11:15.469 fused_ordering(987) 00:11:15.469 fused_ordering(988) 00:11:15.469 fused_ordering(989) 00:11:15.469 fused_ordering(990) 00:11:15.469 fused_ordering(991) 00:11:15.469 fused_ordering(992) 00:11:15.469 fused_ordering(993) 00:11:15.469 fused_ordering(994) 00:11:15.469 fused_ordering(995) 00:11:15.469 fused_ordering(996) 00:11:15.469 fused_ordering(997) 00:11:15.469 fused_ordering(998) 00:11:15.469 fused_ordering(999) 00:11:15.469 fused_ordering(1000) 00:11:15.469 fused_ordering(1001) 00:11:15.469 fused_ordering(1002) 00:11:15.469 fused_ordering(1003) 00:11:15.469 fused_ordering(1004) 00:11:15.469 fused_ordering(1005) 00:11:15.469 fused_ordering(1006) 00:11:15.469 fused_ordering(1007) 00:11:15.469 fused_ordering(1008) 00:11:15.469 fused_ordering(1009) 00:11:15.469 fused_ordering(1010) 00:11:15.469 fused_ordering(1011) 00:11:15.469 fused_ordering(1012) 00:11:15.469 fused_ordering(1013) 00:11:15.469 fused_ordering(1014) 00:11:15.469 fused_ordering(1015) 00:11:15.469 fused_ordering(1016) 00:11:15.469 fused_ordering(1017) 
00:11:15.469 fused_ordering(1018) 00:11:15.469 fused_ordering(1019) 00:11:15.469 fused_ordering(1020) 00:11:15.469 fused_ordering(1021) 00:11:15.469 fused_ordering(1022) 00:11:15.469 fused_ordering(1023) 00:11:15.469 15:21:32 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:15.469 15:21:32 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:15.469 15:21:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:15.469 15:21:32 -- nvmf/common.sh@117 -- # sync 00:11:15.469 15:21:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:15.469 15:21:32 -- nvmf/common.sh@120 -- # set +e 00:11:15.469 15:21:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:15.469 15:21:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:15.469 rmmod nvme_tcp 00:11:15.469 rmmod nvme_fabrics 00:11:15.469 rmmod nvme_keyring 00:11:15.731 15:21:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:15.731 15:21:32 -- nvmf/common.sh@124 -- # set -e 00:11:15.731 15:21:32 -- nvmf/common.sh@125 -- # return 0 00:11:15.731 15:21:32 -- nvmf/common.sh@478 -- # '[' -n 1524588 ']' 00:11:15.731 15:21:32 -- nvmf/common.sh@479 -- # killprocess 1524588 00:11:15.731 15:21:32 -- common/autotest_common.sh@936 -- # '[' -z 1524588 ']' 00:11:15.731 15:21:32 -- common/autotest_common.sh@940 -- # kill -0 1524588 00:11:15.731 15:21:32 -- common/autotest_common.sh@941 -- # uname 00:11:15.731 15:21:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:15.732 15:21:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1524588 00:11:15.732 15:21:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:15.732 15:21:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:15.732 15:21:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1524588' 00:11:15.732 killing process with pid 1524588 00:11:15.732 15:21:32 -- common/autotest_common.sh@955 -- # kill 1524588 00:11:15.732 15:21:32 -- common/autotest_common.sh@960 -- 
# wait 1524588 00:11:15.732 15:21:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:15.732 15:21:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:15.732 15:21:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:15.732 15:21:33 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:15.732 15:21:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:15.732 15:21:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.732 15:21:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:15.732 15:21:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.276 15:21:35 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:18.276 00:11:18.276 real 0m12.834s 00:11:18.276 user 0m6.766s 00:11:18.276 sys 0m6.652s 00:11:18.276 15:21:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:18.276 15:21:35 -- common/autotest_common.sh@10 -- # set +x 00:11:18.276 ************************************ 00:11:18.276 END TEST nvmf_fused_ordering 00:11:18.276 ************************************ 00:11:18.276 15:21:35 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:18.276 15:21:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:18.276 15:21:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:18.276 15:21:35 -- common/autotest_common.sh@10 -- # set +x 00:11:18.276 ************************************ 00:11:18.276 START TEST nvmf_delete_subsystem 00:11:18.276 ************************************ 00:11:18.276 15:21:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:18.276 * Looking for test storage... 
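The nvmftestfini teardown traced above reduces to a few steps: reset the signal traps, unload the NVMe-oF kernel modules, then kill the target process and wait for it. A minimal sketch of that cleanup follows; the function name is illustrative, not the actual nvmf/common.sh helper, and the PID argument stands in for the nvmf_tgt PID seen in the log (1524588).

```shell
#!/usr/bin/env bash
# Sketch of the nvmf test teardown traced in the log above (not nvmf/common.sh itself).

nvmftestfini_sketch() {
    local nvmfpid=$1

    # Stop reacting to signals once teardown has started.
    trap - SIGINT SIGTERM EXIT

    # Unload transport modules; tolerate failures if they are already gone,
    # matching the "set +e ... modprobe -v -r" pattern in the trace.
    modprobe -v -r nvme-tcp || true
    modprobe -v -r nvme-fabrics || true

    # Kill the target and wait for it to exit.
    if [ -n "$nvmfpid" ]; then
        kill "$nvmfpid" 2>/dev/null || true
        wait "$nvmfpid" 2>/dev/null || true
    fi
    return 0
}

# Invocation commented out: it needs root and a running nvmf_tgt.
# nvmftestfini_sketch 1524588
```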
00:11:18.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:18.276 15:21:35 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.276 15:21:35 -- nvmf/common.sh@7 -- # uname -s 00:11:18.276 15:21:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.276 15:21:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.276 15:21:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.276 15:21:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.276 15:21:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.276 15:21:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.276 15:21:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.276 15:21:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.276 15:21:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.276 15:21:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.276 15:21:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:18.276 15:21:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:18.276 15:21:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.276 15:21:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.276 15:21:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.276 15:21:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.276 15:21:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.276 15:21:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.276 15:21:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.276 15:21:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.276 15:21:35 -- 
paths/export.sh@2 -- # PATH=[/opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin prepended repeatedly ahead of the system PATH; full value echoed by paths/export.sh@6 below] 00:11:18.276 15:21:35 -- paths/export.sh@3 -- # PATH=[same entries, /opt/go/1.21.1/bin promoted to front] 00:11:18.276 15:21:35 -- paths/export.sh@4 -- # PATH=[same entries, /opt/protoc/21.7/bin promoted to front] 00:11:18.276 15:21:35 -- paths/export.sh@5 -- # export PATH 00:11:18.276 15:21:35 -- paths/export.sh@6 -- # echo
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.276 15:21:35 -- nvmf/common.sh@47 -- # : 0 00:11:18.276 15:21:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:18.276 15:21:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:18.276 15:21:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.276 15:21:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.276 15:21:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.276 15:21:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:18.276 15:21:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:18.276 15:21:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:18.276 15:21:35 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:18.276 15:21:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:18.276 15:21:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.276 15:21:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:18.276 15:21:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:18.276 15:21:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:18.276 15:21:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.276 15:21:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:18.276 15:21:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.276 15:21:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:18.276 15:21:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:18.277 15:21:35 
-- nvmf/common.sh@285 -- # xtrace_disable 00:11:18.277 15:21:35 -- common/autotest_common.sh@10 -- # set +x 00:11:26.410 15:21:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:26.410 15:21:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:26.410 15:21:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:26.410 15:21:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:26.410 15:21:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:26.410 15:21:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:26.410 15:21:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:26.410 15:21:42 -- nvmf/common.sh@295 -- # net_devs=() 00:11:26.410 15:21:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:26.410 15:21:42 -- nvmf/common.sh@296 -- # e810=() 00:11:26.410 15:21:42 -- nvmf/common.sh@296 -- # local -ga e810 00:11:26.410 15:21:42 -- nvmf/common.sh@297 -- # x722=() 00:11:26.410 15:21:42 -- nvmf/common.sh@297 -- # local -ga x722 00:11:26.410 15:21:42 -- nvmf/common.sh@298 -- # mlx=() 00:11:26.410 15:21:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:26.410 15:21:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.410 15:21:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.410 15:21:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.410 15:21:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.410 15:21:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.410 15:21:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.410 15:21:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.410 15:21:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.410 15:21:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.410 15:21:42 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.410 15:21:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.410 15:21:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:26.410 15:21:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:26.410 15:21:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:26.410 15:21:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:26.410 15:21:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:26.410 15:21:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:26.410 15:21:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:26.410 15:21:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:26.410 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:26.410 15:21:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:26.410 15:21:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:26.410 15:21:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.410 15:21:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.410 15:21:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:26.410 15:21:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:26.410 15:21:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:26.410 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:26.410 15:21:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:26.410 15:21:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:26.410 15:21:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.410 15:21:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.410 15:21:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:26.410 15:21:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:26.410 15:21:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:26.410 15:21:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:26.410 15:21:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:11:26.410 15:21:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.410 15:21:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:26.410 15:21:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.410 15:21:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:26.410 Found net devices under 0000:31:00.0: cvl_0_0 00:11:26.410 15:21:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.410 15:21:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:26.410 15:21:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.410 15:21:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:26.410 15:21:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.410 15:21:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:26.410 Found net devices under 0000:31:00.1: cvl_0_1 00:11:26.410 15:21:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.410 15:21:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:26.410 15:21:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:26.410 15:21:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:26.410 15:21:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:26.410 15:21:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:26.410 15:21:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.410 15:21:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.410 15:21:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:26.410 15:21:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:26.410 15:21:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:26.410 15:21:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:26.410 15:21:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:26.410 15:21:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:11:26.410 15:21:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.410 15:21:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:26.410 15:21:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:26.410 15:21:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:26.410 15:21:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:26.410 15:21:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:26.410 15:21:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:26.410 15:21:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:26.410 15:21:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:26.410 15:21:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:26.410 15:21:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:26.410 15:21:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:26.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:26.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:11:26.410 00:11:26.410 --- 10.0.0.2 ping statistics --- 00:11:26.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.410 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:11:26.410 15:21:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:26.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:26.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:11:26.410 00:11:26.410 --- 10.0.0.1 ping statistics --- 00:11:26.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.410 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:11:26.410 15:21:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.410 15:21:42 -- nvmf/common.sh@411 -- # return 0 00:11:26.410 15:21:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:26.410 15:21:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.410 15:21:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:26.410 15:21:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:26.411 15:21:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.411 15:21:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:26.411 15:21:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:26.411 15:21:42 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:26.411 15:21:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:26.411 15:21:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:26.411 15:21:42 -- common/autotest_common.sh@10 -- # set +x 00:11:26.411 15:21:42 -- nvmf/common.sh@470 -- # nvmfpid=1529421 00:11:26.411 15:21:42 -- nvmf/common.sh@471 -- # waitforlisten 1529421 00:11:26.411 15:21:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:26.411 15:21:42 -- common/autotest_common.sh@817 -- # '[' -z 1529421 ']' 00:11:26.411 15:21:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.411 15:21:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:26.411 15:21:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:26.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.411 15:21:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:26.411 15:21:42 -- common/autotest_common.sh@10 -- # set +x 00:11:26.411 [2024-04-26 15:21:42.801701] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:11:26.411 [2024-04-26 15:21:42.801757] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.411 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.411 [2024-04-26 15:21:42.869540] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:26.411 [2024-04-26 15:21:42.936751] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.411 [2024-04-26 15:21:42.936791] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.411 [2024-04-26 15:21:42.936798] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.411 [2024-04-26 15:21:42.936808] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.411 [2024-04-26 15:21:42.936814] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:26.411 [2024-04-26 15:21:42.936897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.411 [2024-04-26 15:21:42.936898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.411 15:21:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:26.411 15:21:43 -- common/autotest_common.sh@850 -- # return 0 00:11:26.411 15:21:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:26.411 15:21:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:26.411 15:21:43 -- common/autotest_common.sh@10 -- # set +x 00:11:26.411 15:21:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.411 15:21:43 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:26.411 15:21:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.411 15:21:43 -- common/autotest_common.sh@10 -- # set +x 00:11:26.411 [2024-04-26 15:21:43.604425] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.411 15:21:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.411 15:21:43 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:26.411 15:21:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.411 15:21:43 -- common/autotest_common.sh@10 -- # set +x 00:11:26.411 15:21:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.411 15:21:43 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.411 15:21:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.411 15:21:43 -- common/autotest_common.sh@10 -- # set +x 00:11:26.411 [2024-04-26 15:21:43.628585] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.411 15:21:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:11:26.411 15:21:43 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:26.411 15:21:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.411 15:21:43 -- common/autotest_common.sh@10 -- # set +x 00:11:26.411 NULL1 00:11:26.411 15:21:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.411 15:21:43 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:26.411 15:21:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.411 15:21:43 -- common/autotest_common.sh@10 -- # set +x 00:11:26.411 Delay0 00:11:26.411 15:21:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.411 15:21:43 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:26.411 15:21:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.411 15:21:43 -- common/autotest_common.sh@10 -- # set +x 00:11:26.411 15:21:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.411 15:21:43 -- target/delete_subsystem.sh@28 -- # perf_pid=1529701 00:11:26.411 15:21:43 -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:26.411 15:21:43 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:26.411 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.411 [2024-04-26 15:21:43.725252] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
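The `rpc_cmd` sequence above (delete_subsystem.sh @15-@24) can be spelled out against SPDK's `scripts/rpc.py`. By default this sketch only prints the calls — the `rpc`/`RPC_CMD` indirection is added here for illustration; point `RPC_CMD` at a real `rpc.py -s /var/tmp/spdk.sock` invocation to drive a live target:

```shell
#!/usr/bin/env bash
# Subsystem setup from the log: TCP transport, a subsystem with a 4420
# listener, and a deliberately slow namespace built from a null bdev.
set -euo pipefail

rpc() { ${RPC_CMD:-echo rpc.py -s /var/tmp/spdk.sock} "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# 1000 MiB null bdev with 512-byte blocks, wrapped in a delay bdev that adds
# 1000000 us (~1 s) of latency on every read/write path. The artificial
# latency keeps I/O in flight long enough for nvmf_delete_subsystem to race
# it, which is the point of this test.
rpc bdev_null_create NULL1 1000 512
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The flood of `Read/Write completed with error (sct=0, sc=8)` lines that follows is the expected outcome: `spdk_nvme_perf` sees aborted commands once the subsystem is deleted underneath it.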
00:11:28.321 15:21:45 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.321 15:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.321 15:21:45 -- common/autotest_common.sh@10 -- # set +x 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 starting I/O failed: -6 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 starting I/O failed: -6 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 starting I/O failed: -6 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 starting I/O failed: -6 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 starting I/O failed: -6 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 starting I/O failed: -6 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 starting I/O failed: -6 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error 
(sct=0, sc=8) 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 starting I/O failed: -6 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 starting I/O failed: -6 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 starting I/O failed: -6 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 [2024-04-26 15:21:45.848976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2075b60 is same with the state(5) to be set 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 
00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 [2024-04-26 15:21:45.849264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2060c90 is same with the state(5) to be set 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 
starting I/O failed: -6 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 starting I/O failed: -6 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 starting I/O failed: -6 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Read completed with error (sct=0, sc=8) 00:11:28.581 Write completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 starting I/O failed: -6 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 starting I/O failed: -6 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 starting I/O failed: -6 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 starting I/O failed: -6 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 starting I/O failed: -6 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 
starting I/O failed: -6 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 starting I/O failed: -6 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 [2024-04-26 15:21:45.853862] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb65400c3d0 is same with the state(5) to be set 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 
00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Write completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:28.582 Read completed with error (sct=0, sc=8) 00:11:29.523 [2024-04-26 15:21:46.823796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e250 is same with the state(5) to be set 00:11:29.523 Read completed with error (sct=0, sc=8) 00:11:29.523 Write completed with error (sct=0, sc=8) 00:11:29.523 Read completed with error (sct=0, sc=8) 00:11:29.523 Read completed with error (sct=0, sc=8) 00:11:29.523 Read completed with error (sct=0, sc=8) 00:11:29.523 Read completed with error (sct=0, sc=8) 00:11:29.523 Write completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 
Write completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 [2024-04-26 15:21:46.852627] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207f290 is same with the state(5) to be set 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 [2024-04-26 15:21:46.852784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2075cf0 is same with the state(5) to be set 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read 
completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 [2024-04-26 15:21:46.855900] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb65400bf90 is same with the state(5) to be set 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Write completed with 
error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Read completed with error (sct=0, sc=8) 00:11:29.524 Write completed with error (sct=0, sc=8) 00:11:29.524 [2024-04-26 15:21:46.856033] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb65400c690 is same with the state(5) to be set 00:11:29.524 [2024-04-26 15:21:46.856539] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207e250 (9): Bad file descriptor 00:11:29.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:29.524 15:21:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.524 15:21:46 -- target/delete_subsystem.sh@34 -- # delay=0 00:11:29.524 15:21:46 -- target/delete_subsystem.sh@35 -- # kill -0 1529701 00:11:29.524 15:21:46 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:29.524 Initializing NVMe Controllers 00:11:29.524 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:11:29.524 Controller IO queue size 128, less than required. 00:11:29.524 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:29.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:29.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:29.524 Initialization complete. Launching workers. 00:11:29.524 ======================================================== 00:11:29.524 Latency(us) 00:11:29.524 Device Information : IOPS MiB/s Average min max 00:11:29.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 161.41 0.08 913585.10 297.53 1005736.56 00:11:29.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 156.93 0.08 1004439.11 242.80 2002890.00 00:11:29.524 ======================================================== 00:11:29.524 Total : 318.33 0.16 958372.29 242.80 2002890.00 00:11:29.524 00:11:30.100 15:21:47 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:30.100 15:21:47 -- target/delete_subsystem.sh@35 -- # kill -0 1529701 00:11:30.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1529701) - No such process 00:11:30.100 15:21:47 -- target/delete_subsystem.sh@45 -- # NOT wait 1529701 00:11:30.100 15:21:47 -- common/autotest_common.sh@638 -- # local es=0 00:11:30.100 15:21:47 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 1529701 00:11:30.100 15:21:47 -- common/autotest_common.sh@626 -- # local arg=wait 00:11:30.100 15:21:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:30.100 15:21:47 -- common/autotest_common.sh@630 -- # type -t wait 00:11:30.100 15:21:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:30.100 15:21:47 -- common/autotest_common.sh@641 -- # wait 1529701 00:11:30.100 15:21:47 -- 
common/autotest_common.sh@641 -- # es=1 00:11:30.100 15:21:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:30.100 15:21:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:30.100 15:21:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:30.101 15:21:47 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:30.101 15:21:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:30.101 15:21:47 -- common/autotest_common.sh@10 -- # set +x 00:11:30.101 15:21:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:30.101 15:21:47 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:30.101 15:21:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:30.101 15:21:47 -- common/autotest_common.sh@10 -- # set +x 00:11:30.101 [2024-04-26 15:21:47.386055] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.101 15:21:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:30.101 15:21:47 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:30.101 15:21:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:30.101 15:21:47 -- common/autotest_common.sh@10 -- # set +x 00:11:30.101 15:21:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:30.101 15:21:47 -- target/delete_subsystem.sh@54 -- # perf_pid=1530379 00:11:30.101 15:21:47 -- target/delete_subsystem.sh@56 -- # delay=0 00:11:30.101 15:21:47 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:30.101 15:21:47 -- target/delete_subsystem.sh@57 -- # kill -0 1530379 00:11:30.101 15:21:47 -- target/delete_subsystem.sh@58 -- # sleep 0.5 
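The `kill -0` / `sleep 0.5` polling visible in the log (delete_subsystem.sh lines 56-60, and 34-38 earlier) is a bounded wait for the perf process to die. A minimal sketch of the same pattern — `wait_for_exit` is a name chosen here, not one from the test suite:

```shell
#!/usr/bin/env bash
# Poll a PID with kill -0 (signal 0: existence check, nothing delivered),
# giving up after ~10 s (21 iterations x 0.5 s), as the test script does
# while nvmf_delete_subsystem tears the subsystem out from under
# spdk_nvme_perf.
wait_for_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        (( delay++ > 20 )) && return 1   # gave up: process still alive
        sleep 0.5
    done
    return 0                             # process is gone
}

sleep 1 &          # stand-in for the perf process
wait_for_exit $!
echo "perf exited: $?"
```

When the process is already gone, `kill -0` fails immediately and the loop body never runs — which is why the log shows `kill: (1530379) - No such process` from the unconditional `kill -0` on line 57 rather than from the loop itself.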
00:11:30.101 EAL: No free 2048 kB hugepages reported on node 1 00:11:30.101 [2024-04-26 15:21:47.456073] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:30.739 15:21:47 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:30.739 15:21:47 -- target/delete_subsystem.sh@57 -- # kill -0 1530379 00:11:30.739 15:21:47 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:31.000 15:21:48 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:31.000 15:21:48 -- target/delete_subsystem.sh@57 -- # kill -0 1530379 00:11:31.000 15:21:48 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:31.571 15:21:48 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:31.571 15:21:48 -- target/delete_subsystem.sh@57 -- # kill -0 1530379 00:11:31.571 15:21:48 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:32.143 15:21:49 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:32.143 15:21:49 -- target/delete_subsystem.sh@57 -- # kill -0 1530379 00:11:32.143 15:21:49 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:32.717 15:21:49 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:32.717 15:21:49 -- target/delete_subsystem.sh@57 -- # kill -0 1530379 00:11:32.717 15:21:49 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:33.290 15:21:50 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:33.290 15:21:50 -- target/delete_subsystem.sh@57 -- # kill -0 1530379 00:11:33.290 15:21:50 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:33.550 Initializing NVMe Controllers 00:11:33.550 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:33.550 Controller IO queue size 128, less than required. 
00:11:33.550 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:33.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:33.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:33.551 Initialization complete. Launching workers. 00:11:33.551 ======================================================== 00:11:33.551 Latency(us) 00:11:33.551 Device Information : IOPS MiB/s Average min max 00:11:33.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002244.59 1000209.66 1041224.35 00:11:33.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003093.30 1000359.78 1009841.51 00:11:33.551 ======================================================== 00:11:33.551 Total : 256.00 0.12 1002668.95 1000209.66 1041224.35 00:11:33.551 00:11:33.551 15:21:50 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:33.551 15:21:50 -- target/delete_subsystem.sh@57 -- # kill -0 1530379 00:11:33.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1530379) - No such process 00:11:33.551 15:21:50 -- target/delete_subsystem.sh@67 -- # wait 1530379 00:11:33.551 15:21:50 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:33.551 15:21:50 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:33.551 15:21:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:33.551 15:21:50 -- nvmf/common.sh@117 -- # sync 00:11:33.551 15:21:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:33.551 15:21:50 -- nvmf/common.sh@120 -- # set +e 00:11:33.551 15:21:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:33.551 15:21:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:33.551 rmmod nvme_tcp 00:11:33.551 rmmod nvme_fabrics 00:11:33.551 rmmod nvme_keyring 00:11:33.551 15:21:50 -- nvmf/common.sh@123 -- # 
modprobe -v -r nvme-fabrics 00:11:33.551 15:21:50 -- nvmf/common.sh@124 -- # set -e 00:11:33.551 15:21:50 -- nvmf/common.sh@125 -- # return 0 00:11:33.551 15:21:50 -- nvmf/common.sh@478 -- # '[' -n 1529421 ']' 00:11:33.551 15:21:50 -- nvmf/common.sh@479 -- # killprocess 1529421 00:11:33.551 15:21:50 -- common/autotest_common.sh@936 -- # '[' -z 1529421 ']' 00:11:33.551 15:21:50 -- common/autotest_common.sh@940 -- # kill -0 1529421 00:11:33.551 15:21:50 -- common/autotest_common.sh@941 -- # uname 00:11:33.812 15:21:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:33.812 15:21:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1529421 00:11:33.812 15:21:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:33.812 15:21:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:33.812 15:21:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1529421' 00:11:33.812 killing process with pid 1529421 00:11:33.812 15:21:51 -- common/autotest_common.sh@955 -- # kill 1529421 00:11:33.812 15:21:51 -- common/autotest_common.sh@960 -- # wait 1529421 00:11:33.812 15:21:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:33.812 15:21:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:33.812 15:21:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:33.812 15:21:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:33.812 15:21:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:33.812 15:21:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.812 15:21:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:33.812 15:21:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.357 15:21:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:36.357 00:11:36.357 real 0m17.880s 00:11:36.357 user 0m30.737s 00:11:36.357 sys 0m6.234s 00:11:36.357 15:21:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 
00:11:36.357 15:21:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.357 ************************************ 00:11:36.357 END TEST nvmf_delete_subsystem 00:11:36.357 ************************************ 00:11:36.357 15:21:53 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:36.357 15:21:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:36.357 15:21:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:36.357 15:21:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.357 ************************************ 00:11:36.357 START TEST nvmf_ns_masking 00:11:36.357 ************************************ 00:11:36.357 15:21:53 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:36.357 * Looking for test storage... 00:11:36.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:36.357 15:21:53 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.357 15:21:53 -- nvmf/common.sh@7 -- # uname -s 00:11:36.357 15:21:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.357 15:21:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.357 15:21:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.357 15:21:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.357 15:21:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.357 15:21:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.357 15:21:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.357 15:21:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.357 15:21:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.357 15:21:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.357 15:21:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:36.357 15:21:53 -- 
nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:36.357 15:21:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.357 15:21:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.357 15:21:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.357 15:21:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.357 15:21:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:36.357 15:21:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.357 15:21:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.357 15:21:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.357 15:21:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.357 15:21:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.357 15:21:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.357 15:21:53 -- paths/export.sh@5 -- # export PATH 00:11:36.357 15:21:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.357 15:21:53 -- nvmf/common.sh@47 -- # : 0 00:11:36.357 15:21:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:36.357 15:21:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:36.357 15:21:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.357 15:21:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.357 15:21:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.357 15:21:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:36.357 15:21:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:36.357 15:21:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:36.357 15:21:53 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:36.357 15:21:53 -- target/ns_masking.sh@11 -- # loops=5 
00:11:36.357 15:21:53 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:36.357 15:21:53 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:11:36.357 15:21:53 -- target/ns_masking.sh@15 -- # uuidgen 00:11:36.357 15:21:53 -- target/ns_masking.sh@15 -- # HOSTID=6dea467c-f744-4edc-87a6-1245ff6c05c5 00:11:36.357 15:21:53 -- target/ns_masking.sh@44 -- # nvmftestinit 00:11:36.357 15:21:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:36.357 15:21:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.357 15:21:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:36.357 15:21:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:36.357 15:21:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:36.357 15:21:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.357 15:21:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.357 15:21:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.357 15:21:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:36.358 15:21:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:36.358 15:21:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:36.358 15:21:53 -- common/autotest_common.sh@10 -- # set +x 00:11:42.940 15:22:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:42.940 15:22:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:42.940 15:22:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:42.940 15:22:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:42.940 15:22:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:42.940 15:22:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:42.940 15:22:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:42.940 15:22:00 -- nvmf/common.sh@295 -- # net_devs=() 00:11:42.940 15:22:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:42.940 15:22:00 -- nvmf/common.sh@296 -- # e810=() 00:11:42.940 15:22:00 -- 
nvmf/common.sh@296 -- # local -ga e810 00:11:42.940 15:22:00 -- nvmf/common.sh@297 -- # x722=() 00:11:42.940 15:22:00 -- nvmf/common.sh@297 -- # local -ga x722 00:11:42.940 15:22:00 -- nvmf/common.sh@298 -- # mlx=() 00:11:42.940 15:22:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:42.940 15:22:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.940 15:22:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.940 15:22:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:42.940 15:22:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.940 15:22:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.940 15:22:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.940 15:22:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.940 15:22:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.940 15:22:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.940 15:22:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.940 15:22:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.940 15:22:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:42.940 15:22:00 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:42.940 15:22:00 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:42.940 15:22:00 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:42.940 15:22:00 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:42.940 15:22:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:42.940 15:22:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:42.940 15:22:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:42.940 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:42.940 15:22:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:11:42.940 15:22:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:42.940 15:22:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.940 15:22:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.940 15:22:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:42.940 15:22:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:42.940 15:22:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:42.940 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:42.940 15:22:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:42.940 15:22:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:42.940 15:22:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.940 15:22:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.940 15:22:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:42.940 15:22:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:42.940 15:22:00 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:42.940 15:22:00 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:42.940 15:22:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:42.940 15:22:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.940 15:22:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:42.940 15:22:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.940 15:22:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:42.940 Found net devices under 0000:31:00.0: cvl_0_0 00:11:42.940 15:22:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.940 15:22:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:42.940 15:22:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.940 15:22:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:42.940 15:22:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.940 15:22:00 -- 
nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:42.940 Found net devices under 0000:31:00.1: cvl_0_1 00:11:42.940 15:22:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.940 15:22:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:42.940 15:22:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:42.940 15:22:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:42.940 15:22:00 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:42.940 15:22:00 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:42.940 15:22:00 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.940 15:22:00 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.940 15:22:00 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.940 15:22:00 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:42.940 15:22:00 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.940 15:22:00 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.940 15:22:00 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:42.940 15:22:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.940 15:22:00 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.940 15:22:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:42.940 15:22:00 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:42.940 15:22:00 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.940 15:22:00 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:42.940 15:22:00 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:42.940 15:22:00 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:42.940 15:22:00 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:43.200 15:22:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.200 15:22:00 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.200 15:22:00 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:43.200 15:22:00 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:43.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:43.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:11:43.200 00:11:43.200 --- 10.0.0.2 ping statistics --- 00:11:43.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.201 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:11:43.201 15:22:00 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:43.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:43.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:11:43.201 00:11:43.201 --- 10.0.0.1 ping statistics --- 00:11:43.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.201 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:11:43.201 15:22:00 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.201 15:22:00 -- nvmf/common.sh@411 -- # return 0 00:11:43.201 15:22:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:43.201 15:22:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.201 15:22:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:43.201 15:22:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:43.201 15:22:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.201 15:22:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:43.201 15:22:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:43.201 15:22:00 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:11:43.201 15:22:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:43.201 15:22:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:43.201 15:22:00 -- common/autotest_common.sh@10 -- # set +x 00:11:43.201 15:22:00 -- nvmf/common.sh@470 -- # 
nvmfpid=1535447 00:11:43.201 15:22:00 -- nvmf/common.sh@471 -- # waitforlisten 1535447 00:11:43.201 15:22:00 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:43.201 15:22:00 -- common/autotest_common.sh@817 -- # '[' -z 1535447 ']' 00:11:43.201 15:22:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.201 15:22:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:43.201 15:22:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.201 15:22:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:43.201 15:22:00 -- common/autotest_common.sh@10 -- # set +x 00:11:43.201 [2024-04-26 15:22:00.641337] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:11:43.201 [2024-04-26 15:22:00.641405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.460 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.460 [2024-04-26 15:22:00.716072] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.460 [2024-04-26 15:22:00.789727] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.460 [2024-04-26 15:22:00.789771] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.460 [2024-04-26 15:22:00.789780] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.460 [2024-04-26 15:22:00.789788] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:43.460 [2024-04-26 15:22:00.789795] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:43.460 [2024-04-26 15:22:00.789939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.460 [2024-04-26 15:22:00.790054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.460 [2024-04-26 15:22:00.790211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.460 [2024-04-26 15:22:00.790212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.029 15:22:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:44.029 15:22:01 -- common/autotest_common.sh@850 -- # return 0 00:11:44.029 15:22:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:44.029 15:22:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:44.029 15:22:01 -- common/autotest_common.sh@10 -- # set +x 00:11:44.029 15:22:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.029 15:22:01 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:44.289 [2024-04-26 15:22:01.590730] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.289 15:22:01 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:11:44.289 15:22:01 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:11:44.289 15:22:01 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:44.548 Malloc1 00:11:44.548 15:22:01 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:44.548 Malloc2 00:11:44.548 15:22:01 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:11:44.808 15:22:02 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:45.068 15:22:02 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:45.068 [2024-04-26 15:22:02.451858] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:45.068 15:22:02 -- target/ns_masking.sh@61 -- # connect 00:11:45.068 15:22:02 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6dea467c-f744-4edc-87a6-1245ff6c05c5 -a 10.0.0.2 -s 4420 -i 4 00:11:45.330 15:22:02 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:11:45.330 15:22:02 -- common/autotest_common.sh@1184 -- # local i=0 00:11:45.330 15:22:02 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:45.330 15:22:02 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:45.330 15:22:02 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:47.875 15:22:04 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:47.875 15:22:04 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:47.875 15:22:04 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:47.875 15:22:04 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:47.875 15:22:04 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:47.875 15:22:04 -- common/autotest_common.sh@1194 -- # return 0 00:11:47.875 15:22:04 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:47.875 15:22:04 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:47.875 15:22:04 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 
00:11:47.875 15:22:04 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:47.875 15:22:04 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:11:47.875 15:22:04 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:47.875 15:22:04 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:47.875 [ 0]:0x1 00:11:47.875 15:22:04 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:47.875 15:22:04 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:47.875 15:22:04 -- target/ns_masking.sh@40 -- # nguid=0bacb181f50b4074be37c9dc8f635fa9 00:11:47.875 15:22:04 -- target/ns_masking.sh@41 -- # [[ 0bacb181f50b4074be37c9dc8f635fa9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.875 15:22:04 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:47.875 15:22:05 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:11:47.875 15:22:05 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:47.875 15:22:05 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:47.875 [ 0]:0x1 00:11:47.875 15:22:05 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:47.875 15:22:05 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:47.875 15:22:05 -- target/ns_masking.sh@40 -- # nguid=0bacb181f50b4074be37c9dc8f635fa9 00:11:47.875 15:22:05 -- target/ns_masking.sh@41 -- # [[ 0bacb181f50b4074be37c9dc8f635fa9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.875 15:22:05 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:11:47.875 15:22:05 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:47.875 15:22:05 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:47.875 [ 1]:0x2 00:11:47.875 15:22:05 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:47.875 15:22:05 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:47.875 15:22:05 -- target/ns_masking.sh@40 -- # 
nguid=1c4d7b8b107f42ceadddab56147b09df 00:11:47.875 15:22:05 -- target/ns_masking.sh@41 -- # [[ 1c4d7b8b107f42ceadddab56147b09df != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.875 15:22:05 -- target/ns_masking.sh@69 -- # disconnect 00:11:47.875 15:22:05 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.875 15:22:05 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:48.136 15:22:05 -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:48.136 15:22:05 -- target/ns_masking.sh@77 -- # connect 1 00:11:48.136 15:22:05 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6dea467c-f744-4edc-87a6-1245ff6c05c5 -a 10.0.0.2 -s 4420 -i 4 00:11:48.396 15:22:05 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:48.396 15:22:05 -- common/autotest_common.sh@1184 -- # local i=0 00:11:48.396 15:22:05 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.396 15:22:05 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:11:48.396 15:22:05 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:11:48.396 15:22:05 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:50.306 15:22:07 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:50.306 15:22:07 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:50.306 15:22:07 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:50.306 15:22:07 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:50.306 15:22:07 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 
00:11:50.306 15:22:07 -- common/autotest_common.sh@1194 -- # return 0 00:11:50.306 15:22:07 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:50.306 15:22:07 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:50.566 15:22:07 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:50.566 15:22:07 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:50.566 15:22:07 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:11:50.566 15:22:07 -- common/autotest_common.sh@638 -- # local es=0 00:11:50.566 15:22:07 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:50.566 15:22:07 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:50.566 15:22:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:50.566 15:22:07 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:50.566 15:22:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:50.566 15:22:07 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:50.566 15:22:07 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:50.566 15:22:07 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:50.566 15:22:07 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:50.566 15:22:07 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:50.566 15:22:07 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:50.566 15:22:07 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:50.566 15:22:07 -- common/autotest_common.sh@641 -- # es=1 00:11:50.566 15:22:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:50.566 15:22:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:50.566 15:22:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:50.566 15:22:07 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 
00:11:50.566 15:22:07 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:50.566 15:22:07 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:50.566 [ 0]:0x2 00:11:50.566 15:22:07 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:50.566 15:22:07 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:50.566 15:22:07 -- target/ns_masking.sh@40 -- # nguid=1c4d7b8b107f42ceadddab56147b09df 00:11:50.566 15:22:07 -- target/ns_masking.sh@41 -- # [[ 1c4d7b8b107f42ceadddab56147b09df != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:50.566 15:22:07 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:50.827 15:22:08 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:11:50.827 15:22:08 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:50.827 15:22:08 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:50.827 [ 0]:0x1 00:11:50.827 15:22:08 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:50.827 15:22:08 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:50.827 15:22:08 -- target/ns_masking.sh@40 -- # nguid=0bacb181f50b4074be37c9dc8f635fa9 00:11:50.827 15:22:08 -- target/ns_masking.sh@41 -- # [[ 0bacb181f50b4074be37c9dc8f635fa9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:50.827 15:22:08 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:11:50.827 15:22:08 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:50.827 15:22:08 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:50.827 [ 1]:0x2 00:11:50.827 15:22:08 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:50.827 15:22:08 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:50.827 15:22:08 -- target/ns_masking.sh@40 -- # nguid=1c4d7b8b107f42ceadddab56147b09df 00:11:50.827 15:22:08 -- target/ns_masking.sh@41 -- # [[ 1c4d7b8b107f42ceadddab56147b09df != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:50.827 15:22:08 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:51.087 15:22:08 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:11:51.087 15:22:08 -- common/autotest_common.sh@638 -- # local es=0 00:11:51.087 15:22:08 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:51.087 15:22:08 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:51.087 15:22:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:51.087 15:22:08 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:51.087 15:22:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:51.087 15:22:08 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:51.087 15:22:08 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:51.087 15:22:08 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:51.087 15:22:08 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:51.087 15:22:08 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:51.087 15:22:08 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:51.087 15:22:08 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.087 15:22:08 -- common/autotest_common.sh@641 -- # es=1 00:11:51.087 15:22:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:51.087 15:22:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:51.087 15:22:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:51.087 15:22:08 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:11:51.087 15:22:08 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:51.087 15:22:08 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:51.087 [ 0]:0x2 
00:11:51.088 15:22:08 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:51.088 15:22:08 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:51.088 15:22:08 -- target/ns_masking.sh@40 -- # nguid=1c4d7b8b107f42ceadddab56147b09df 00:11:51.088 15:22:08 -- target/ns_masking.sh@41 -- # [[ 1c4d7b8b107f42ceadddab56147b09df != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.088 15:22:08 -- target/ns_masking.sh@91 -- # disconnect 00:11:51.088 15:22:08 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:51.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.348 15:22:08 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:51.348 15:22:08 -- target/ns_masking.sh@95 -- # connect 2 00:11:51.348 15:22:08 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6dea467c-f744-4edc-87a6-1245ff6c05c5 -a 10.0.0.2 -s 4420 -i 4 00:11:51.608 15:22:08 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:51.608 15:22:08 -- common/autotest_common.sh@1184 -- # local i=0 00:11:51.608 15:22:08 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.608 15:22:08 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:11:51.608 15:22:08 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:11:51.608 15:22:08 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:53.515 15:22:10 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:53.515 15:22:10 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:53.515 15:22:10 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:53.775 15:22:10 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:11:53.775 15:22:10 -- common/autotest_common.sh@1194 -- # (( nvme_devices 
== nvme_device_counter )) 00:11:53.775 15:22:10 -- common/autotest_common.sh@1194 -- # return 0 00:11:53.775 15:22:10 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:53.775 15:22:10 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:53.775 15:22:11 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:53.775 15:22:11 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:53.775 15:22:11 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:11:53.775 15:22:11 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:53.775 15:22:11 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:53.775 [ 0]:0x1 00:11:53.775 15:22:11 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:53.775 15:22:11 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:53.775 15:22:11 -- target/ns_masking.sh@40 -- # nguid=0bacb181f50b4074be37c9dc8f635fa9 00:11:53.775 15:22:11 -- target/ns_masking.sh@41 -- # [[ 0bacb181f50b4074be37c9dc8f635fa9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:53.775 15:22:11 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:11:53.775 15:22:11 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:53.775 15:22:11 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:54.035 [ 1]:0x2 00:11:54.035 15:22:11 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:54.035 15:22:11 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:54.035 15:22:11 -- target/ns_masking.sh@40 -- # nguid=1c4d7b8b107f42ceadddab56147b09df 00:11:54.035 15:22:11 -- target/ns_masking.sh@41 -- # [[ 1c4d7b8b107f42ceadddab56147b09df != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.035 15:22:11 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:54.035 15:22:11 -- target/ns_masking.sh@101 -- # 
NOT ns_is_visible 0x1 00:11:54.035 15:22:11 -- common/autotest_common.sh@638 -- # local es=0 00:11:54.035 15:22:11 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:54.035 15:22:11 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:54.035 15:22:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:54.035 15:22:11 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:54.035 15:22:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:54.035 15:22:11 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:54.035 15:22:11 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:54.035 15:22:11 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:54.295 15:22:11 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:54.295 15:22:11 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:54.295 15:22:11 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:54.295 15:22:11 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.295 15:22:11 -- common/autotest_common.sh@641 -- # es=1 00:11:54.295 15:22:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:54.295 15:22:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:54.295 15:22:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:54.295 15:22:11 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:11:54.295 15:22:11 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:54.295 15:22:11 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:54.295 [ 0]:0x2 00:11:54.295 15:22:11 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:54.295 15:22:11 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:54.295 15:22:11 -- target/ns_masking.sh@40 -- # nguid=1c4d7b8b107f42ceadddab56147b09df 00:11:54.295 15:22:11 -- target/ns_masking.sh@41 -- # [[ 
1c4d7b8b107f42ceadddab56147b09df != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.295 15:22:11 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:54.295 15:22:11 -- common/autotest_common.sh@638 -- # local es=0 00:11:54.295 15:22:11 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:54.295 15:22:11 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.295 15:22:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:54.295 15:22:11 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.295 15:22:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:54.295 15:22:11 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.295 15:22:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:54.295 15:22:11 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.295 15:22:11 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:54.295 15:22:11 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:54.295 [2024-04-26 15:22:11.724383] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:54.295 request: 00:11:54.295 { 00:11:54.295 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:54.295 "nsid": 2, 
00:11:54.295 "host": "nqn.2016-06.io.spdk:host1", 00:11:54.295 "method": "nvmf_ns_remove_host", 00:11:54.295 "req_id": 1 00:11:54.295 } 00:11:54.295 Got JSON-RPC error response 00:11:54.295 response: 00:11:54.295 { 00:11:54.295 "code": -32602, 00:11:54.295 "message": "Invalid parameters" 00:11:54.295 } 00:11:54.555 15:22:11 -- common/autotest_common.sh@641 -- # es=1 00:11:54.555 15:22:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:54.555 15:22:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:54.555 15:22:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:54.555 15:22:11 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:11:54.555 15:22:11 -- common/autotest_common.sh@638 -- # local es=0 00:11:54.555 15:22:11 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:54.555 15:22:11 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:54.555 15:22:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:54.555 15:22:11 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:54.555 15:22:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:54.555 15:22:11 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:54.555 15:22:11 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:54.555 15:22:11 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:54.555 15:22:11 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:54.555 15:22:11 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:54.555 15:22:11 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:54.555 15:22:11 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.555 15:22:11 -- common/autotest_common.sh@641 -- # es=1 00:11:54.555 15:22:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:54.555 15:22:11 -- common/autotest_common.sh@660 
-- # [[ -n '' ]] 00:11:54.555 15:22:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:54.555 15:22:11 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:11:54.555 15:22:11 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:54.555 15:22:11 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:54.555 [ 0]:0x2 00:11:54.555 15:22:11 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:54.555 15:22:11 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:54.555 15:22:11 -- target/ns_masking.sh@40 -- # nguid=1c4d7b8b107f42ceadddab56147b09df 00:11:54.555 15:22:11 -- target/ns_masking.sh@41 -- # [[ 1c4d7b8b107f42ceadddab56147b09df != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.555 15:22:11 -- target/ns_masking.sh@108 -- # disconnect 00:11:54.555 15:22:11 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.816 15:22:12 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.816 15:22:12 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:11:54.816 15:22:12 -- target/ns_masking.sh@114 -- # nvmftestfini 00:11:54.816 15:22:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:54.816 15:22:12 -- nvmf/common.sh@117 -- # sync 00:11:54.816 15:22:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:54.816 15:22:12 -- nvmf/common.sh@120 -- # set +e 00:11:54.816 15:22:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:54.816 15:22:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:54.816 rmmod nvme_tcp 00:11:54.816 rmmod nvme_fabrics 00:11:55.077 rmmod nvme_keyring 00:11:55.077 15:22:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:55.077 15:22:12 -- nvmf/common.sh@124 -- # set -e 00:11:55.077 15:22:12 -- nvmf/common.sh@125 -- # return 0 00:11:55.077 15:22:12 -- 
nvmf/common.sh@478 -- # '[' -n 1535447 ']' 00:11:55.077 15:22:12 -- nvmf/common.sh@479 -- # killprocess 1535447 00:11:55.077 15:22:12 -- common/autotest_common.sh@936 -- # '[' -z 1535447 ']' 00:11:55.077 15:22:12 -- common/autotest_common.sh@940 -- # kill -0 1535447 00:11:55.077 15:22:12 -- common/autotest_common.sh@941 -- # uname 00:11:55.077 15:22:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:55.077 15:22:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1535447 00:11:55.077 15:22:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:55.077 15:22:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:55.077 15:22:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1535447' 00:11:55.077 killing process with pid 1535447 00:11:55.077 15:22:12 -- common/autotest_common.sh@955 -- # kill 1535447 00:11:55.077 15:22:12 -- common/autotest_common.sh@960 -- # wait 1535447 00:11:55.077 15:22:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:55.077 15:22:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:55.077 15:22:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:55.077 15:22:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:55.077 15:22:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:55.077 15:22:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.077 15:22:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:55.077 15:22:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.619 15:22:14 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:57.619 00:11:57.619 real 0m21.129s 00:11:57.619 user 0m51.202s 00:11:57.619 sys 0m6.819s 00:11:57.619 15:22:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:57.619 15:22:14 -- common/autotest_common.sh@10 -- # set +x 00:11:57.619 ************************************ 00:11:57.619 END TEST nvmf_ns_masking 00:11:57.619 
************************************ 00:11:57.619 15:22:14 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:57.619 15:22:14 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:57.619 15:22:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:57.619 15:22:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:57.619 15:22:14 -- common/autotest_common.sh@10 -- # set +x 00:11:57.619 ************************************ 00:11:57.619 START TEST nvmf_nvme_cli 00:11:57.619 ************************************ 00:11:57.619 15:22:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:57.619 * Looking for test storage... 00:11:57.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.619 15:22:14 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:57.619 15:22:14 -- nvmf/common.sh@7 -- # uname -s 00:11:57.619 15:22:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.619 15:22:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.619 15:22:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.619 15:22:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.619 15:22:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.619 15:22:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.619 15:22:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.619 15:22:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.619 15:22:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.619 15:22:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.619 15:22:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:57.619 15:22:14 -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:57.619 15:22:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.619 15:22:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.619 15:22:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:57.619 15:22:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.619 15:22:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:57.619 15:22:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.619 15:22:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.619 15:22:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.619 15:22:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.619 15:22:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.619 15:22:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.619 15:22:14 -- paths/export.sh@5 -- # export PATH 00:11:57.619 15:22:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.619 15:22:14 -- nvmf/common.sh@47 -- # : 0 00:11:57.619 15:22:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:57.619 15:22:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:57.619 15:22:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.619 15:22:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.619 15:22:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.619 15:22:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:57.619 15:22:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:57.619 15:22:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:57.619 15:22:14 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:57.619 15:22:14 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:57.619 15:22:14 -- target/nvme_cli.sh@14 
-- # devs=() 00:11:57.619 15:22:14 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:57.619 15:22:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:57.619 15:22:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:57.619 15:22:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:57.619 15:22:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:57.619 15:22:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:57.619 15:22:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.619 15:22:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:57.619 15:22:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.619 15:22:14 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:57.619 15:22:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:57.619 15:22:14 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:57.619 15:22:14 -- common/autotest_common.sh@10 -- # set +x 00:12:05.759 15:22:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:05.759 15:22:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:05.759 15:22:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:05.759 15:22:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:05.759 15:22:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:05.759 15:22:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:05.759 15:22:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:05.759 15:22:21 -- nvmf/common.sh@295 -- # net_devs=() 00:12:05.759 15:22:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:05.759 15:22:21 -- nvmf/common.sh@296 -- # e810=() 00:12:05.759 15:22:21 -- nvmf/common.sh@296 -- # local -ga e810 00:12:05.759 15:22:21 -- nvmf/common.sh@297 -- # x722=() 00:12:05.759 15:22:21 -- nvmf/common.sh@297 -- # local -ga x722 00:12:05.759 15:22:21 -- nvmf/common.sh@298 -- # mlx=() 00:12:05.759 15:22:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:05.759 15:22:21 -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:05.759 15:22:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:05.759 15:22:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:05.759 15:22:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:05.759 15:22:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:05.759 15:22:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:05.759 15:22:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:05.759 15:22:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:05.759 15:22:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:05.759 15:22:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:05.759 15:22:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:05.759 15:22:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:05.759 15:22:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:05.759 15:22:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:05.759 15:22:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:05.759 15:22:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:05.759 15:22:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:05.759 15:22:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:05.759 15:22:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:05.759 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:05.759 15:22:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:05.759 15:22:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:05.759 15:22:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.759 15:22:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.759 15:22:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:05.759 15:22:21 -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:05.759 15:22:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:05.759 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:05.759 15:22:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:05.759 15:22:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:05.759 15:22:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.759 15:22:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.759 15:22:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:05.759 15:22:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:05.759 15:22:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:05.759 15:22:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:05.759 15:22:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:05.759 15:22:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.759 15:22:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:05.759 15:22:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.759 15:22:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:05.759 Found net devices under 0000:31:00.0: cvl_0_0 00:12:05.759 15:22:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.759 15:22:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:05.759 15:22:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.759 15:22:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:05.759 15:22:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.759 15:22:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:05.759 Found net devices under 0000:31:00.1: cvl_0_1 00:12:05.759 15:22:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.759 15:22:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:05.759 15:22:21 -- nvmf/common.sh@403 
-- # is_hw=yes 00:12:05.759 15:22:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:05.759 15:22:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:05.759 15:22:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:05.759 15:22:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.759 15:22:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:05.759 15:22:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:05.759 15:22:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:05.759 15:22:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:05.759 15:22:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:05.759 15:22:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:05.759 15:22:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:05.760 15:22:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.760 15:22:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:05.760 15:22:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:05.760 15:22:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:05.760 15:22:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:05.760 15:22:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:05.760 15:22:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:05.760 15:22:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:05.760 15:22:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:05.760 15:22:21 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:05.760 15:22:21 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:05.760 15:22:21 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:05.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:05.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.740 ms 00:12:05.760 00:12:05.760 --- 10.0.0.2 ping statistics --- 00:12:05.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.760 rtt min/avg/max/mdev = 0.740/0.740/0.740/0.000 ms 00:12:05.760 15:22:21 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:05.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:05.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:12:05.760 00:12:05.760 --- 10.0.0.1 ping statistics --- 00:12:05.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.760 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:12:05.760 15:22:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.760 15:22:22 -- nvmf/common.sh@411 -- # return 0 00:12:05.760 15:22:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:05.760 15:22:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.760 15:22:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:05.760 15:22:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:05.760 15:22:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.760 15:22:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:05.760 15:22:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:05.760 15:22:22 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:05.760 15:22:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:05.760 15:22:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:05.760 15:22:22 -- common/autotest_common.sh@10 -- # set +x 00:12:05.760 15:22:22 -- nvmf/common.sh@470 -- # nvmfpid=1542026 00:12:05.760 15:22:22 -- nvmf/common.sh@471 -- # waitforlisten 1542026 00:12:05.760 15:22:22 -- common/autotest_common.sh@817 -- # '[' -z 1542026 ']' 00:12:05.760 15:22:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.760 15:22:22 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:12:05.760 15:22:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.760 15:22:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:05.760 15:22:22 -- common/autotest_common.sh@10 -- # set +x 00:12:05.760 15:22:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:05.760 [2024-04-26 15:22:22.109899] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:12:05.760 [2024-04-26 15:22:22.109964] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.760 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.760 [2024-04-26 15:22:22.182532] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.760 [2024-04-26 15:22:22.255466] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.760 [2024-04-26 15:22:22.255508] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.760 [2024-04-26 15:22:22.255517] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.760 [2024-04-26 15:22:22.255524] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.760 [2024-04-26 15:22:22.255530] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:05.760 [2024-04-26 15:22:22.255688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.760 [2024-04-26 15:22:22.255777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.760 [2024-04-26 15:22:22.255934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.760 [2024-04-26 15:22:22.255934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.760 15:22:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:05.760 15:22:22 -- common/autotest_common.sh@850 -- # return 0 00:12:05.760 15:22:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:05.760 15:22:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:05.760 15:22:22 -- common/autotest_common.sh@10 -- # set +x 00:12:05.760 15:22:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.760 15:22:22 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:05.760 15:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.760 15:22:22 -- common/autotest_common.sh@10 -- # set +x 00:12:05.760 [2024-04-26 15:22:22.930437] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.760 15:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.760 15:22:22 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:05.760 15:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.760 15:22:22 -- common/autotest_common.sh@10 -- # set +x 00:12:05.760 Malloc0 00:12:05.760 15:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.760 15:22:22 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:05.760 15:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.760 15:22:22 -- common/autotest_common.sh@10 -- # set +x 00:12:05.760 Malloc1 00:12:05.760 15:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:12:05.760 15:22:22 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:05.760 15:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.760 15:22:22 -- common/autotest_common.sh@10 -- # set +x 00:12:05.760 15:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.760 15:22:22 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:05.760 15:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.760 15:22:22 -- common/autotest_common.sh@10 -- # set +x 00:12:05.760 15:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.760 15:22:23 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:05.760 15:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.760 15:22:23 -- common/autotest_common.sh@10 -- # set +x 00:12:05.760 15:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.760 15:22:23 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.760 15:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.760 15:22:23 -- common/autotest_common.sh@10 -- # set +x 00:12:05.760 [2024-04-26 15:22:23.020488] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.760 15:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.760 15:22:23 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:05.760 15:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.760 15:22:23 -- common/autotest_common.sh@10 -- # set +x 00:12:05.760 15:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.760 15:22:23 -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:12:05.760 00:12:05.760 Discovery Log Number of Records 2, Generation counter 2 00:12:05.760 =====Discovery Log Entry 0====== 00:12:05.760 trtype: tcp 00:12:05.760 adrfam: ipv4 00:12:05.760 subtype: current discovery subsystem 00:12:05.760 treq: not required 00:12:05.760 portid: 0 00:12:05.760 trsvcid: 4420 00:12:05.760 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:05.760 traddr: 10.0.0.2 00:12:05.760 eflags: explicit discovery connections, duplicate discovery information 00:12:05.760 sectype: none 00:12:05.760 =====Discovery Log Entry 1====== 00:12:05.760 trtype: tcp 00:12:05.760 adrfam: ipv4 00:12:05.760 subtype: nvme subsystem 00:12:05.760 treq: not required 00:12:05.760 portid: 0 00:12:05.760 trsvcid: 4420 00:12:05.760 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:05.760 traddr: 10.0.0.2 00:12:05.760 eflags: none 00:12:05.760 sectype: none 00:12:05.760 15:22:23 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:05.760 15:22:23 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:05.760 15:22:23 -- nvmf/common.sh@511 -- # local dev _ 00:12:05.760 15:22:23 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:05.760 15:22:23 -- nvmf/common.sh@510 -- # nvme list 00:12:05.760 15:22:23 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:12:05.760 15:22:23 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:05.760 15:22:23 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:12:05.760 15:22:23 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:05.760 15:22:23 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:05.760 15:22:23 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:07.681 15:22:24 -- 
target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:07.681 15:22:24 -- common/autotest_common.sh@1184 -- # local i=0 00:12:07.681 15:22:24 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.681 15:22:24 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:12:07.681 15:22:24 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:12:07.681 15:22:24 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:09.625 15:22:26 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:09.625 15:22:26 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:09.625 15:22:26 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.625 15:22:26 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:12:09.625 15:22:26 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.625 15:22:26 -- common/autotest_common.sh@1194 -- # return 0 00:12:09.625 15:22:26 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:09.625 15:22:26 -- nvmf/common.sh@511 -- # local dev _ 00:12:09.625 15:22:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:09.625 15:22:26 -- nvmf/common.sh@510 -- # nvme list 00:12:09.625 15:22:26 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:12:09.625 15:22:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:09.625 15:22:26 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:12:09.625 15:22:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:09.625 15:22:26 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:09.625 15:22:26 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:12:09.625 15:22:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:09.625 15:22:26 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:09.625 15:22:26 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:12:09.625 15:22:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:09.625 15:22:26 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 
00:12:09.625 /dev/nvme0n1 ]] 00:12:09.625 15:22:26 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:09.625 15:22:26 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:09.625 15:22:26 -- nvmf/common.sh@511 -- # local dev _ 00:12:09.625 15:22:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:09.625 15:22:26 -- nvmf/common.sh@510 -- # nvme list 00:12:09.625 15:22:26 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:12:09.625 15:22:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:09.625 15:22:26 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:12:09.625 15:22:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:09.625 15:22:26 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:09.625 15:22:26 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:12:09.625 15:22:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:09.625 15:22:26 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:09.625 15:22:26 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:12:09.625 15:22:26 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:09.625 15:22:26 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:09.625 15:22:26 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.625 15:22:26 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:09.625 15:22:26 -- common/autotest_common.sh@1205 -- # local i=0 00:12:09.625 15:22:26 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:09.625 15:22:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.625 15:22:26 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:09.625 15:22:26 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.625 15:22:26 -- common/autotest_common.sh@1217 -- # return 0 00:12:09.625 15:22:26 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 
00:12:09.625 15:22:26 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.625 15:22:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:09.625 15:22:26 -- common/autotest_common.sh@10 -- # set +x 00:12:09.625 15:22:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:09.625 15:22:26 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:09.625 15:22:26 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:09.625 15:22:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:09.625 15:22:26 -- nvmf/common.sh@117 -- # sync 00:12:09.625 15:22:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:09.625 15:22:26 -- nvmf/common.sh@120 -- # set +e 00:12:09.625 15:22:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:09.625 15:22:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:09.625 rmmod nvme_tcp 00:12:09.625 rmmod nvme_fabrics 00:12:09.625 rmmod nvme_keyring 00:12:09.625 15:22:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:09.625 15:22:26 -- nvmf/common.sh@124 -- # set -e 00:12:09.625 15:22:26 -- nvmf/common.sh@125 -- # return 0 00:12:09.625 15:22:26 -- nvmf/common.sh@478 -- # '[' -n 1542026 ']' 00:12:09.625 15:22:26 -- nvmf/common.sh@479 -- # killprocess 1542026 00:12:09.625 15:22:26 -- common/autotest_common.sh@936 -- # '[' -z 1542026 ']' 00:12:09.625 15:22:26 -- common/autotest_common.sh@940 -- # kill -0 1542026 00:12:09.625 15:22:26 -- common/autotest_common.sh@941 -- # uname 00:12:09.625 15:22:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:09.625 15:22:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1542026 00:12:09.625 15:22:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:09.625 15:22:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:09.625 15:22:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1542026' 00:12:09.625 killing process with pid 1542026 00:12:09.625 15:22:27 -- 
common/autotest_common.sh@955 -- # kill 1542026 00:12:09.625 15:22:27 -- common/autotest_common.sh@960 -- # wait 1542026 00:12:09.885 15:22:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:09.885 15:22:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:09.885 15:22:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:09.885 15:22:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:09.885 15:22:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:09.885 15:22:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.885 15:22:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:09.886 15:22:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.427 15:22:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:12.427 00:12:12.427 real 0m14.491s 00:12:12.427 user 0m21.723s 00:12:12.427 sys 0m5.912s 00:12:12.427 15:22:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:12.427 15:22:29 -- common/autotest_common.sh@10 -- # set +x 00:12:12.427 ************************************ 00:12:12.427 END TEST nvmf_nvme_cli 00:12:12.427 ************************************ 00:12:12.427 15:22:29 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:12.427 15:22:29 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:12.427 15:22:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:12.427 15:22:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:12.427 15:22:29 -- common/autotest_common.sh@10 -- # set +x 00:12:12.427 ************************************ 00:12:12.427 START TEST nvmf_vfio_user 00:12:12.427 ************************************ 00:12:12.427 15:22:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:12.427 * Looking for test storage... 
00:12:12.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.427 15:22:29 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.427 15:22:29 -- nvmf/common.sh@7 -- # uname -s 00:12:12.427 15:22:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.427 15:22:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.427 15:22:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.427 15:22:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.427 15:22:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.427 15:22:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.427 15:22:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.427 15:22:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.427 15:22:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.427 15:22:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.427 15:22:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:12.427 15:22:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:12.427 15:22:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.427 15:22:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.427 15:22:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.427 15:22:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.427 15:22:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.427 15:22:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.427 15:22:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.427 15:22:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.427 15:22:29 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.427 15:22:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.427 15:22:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.427 15:22:29 -- paths/export.sh@5 -- # export PATH 00:12:12.427 15:22:29 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.427 15:22:29 -- nvmf/common.sh@47 -- # : 0 00:12:12.427 15:22:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:12.427 15:22:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:12.427 15:22:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.427 15:22:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.427 15:22:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.427 15:22:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:12.427 15:22:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:12.427 15:22:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:12.427 15:22:29 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:12.427 15:22:29 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:12.427 15:22:29 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:12.427 15:22:29 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:12.427 15:22:29 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:12.427 15:22:29 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:12.427 15:22:29 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:12.427 15:22:29 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:12.427 15:22:29 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:12.427 15:22:29 -- target/nvmf_vfio_user.sh@52 -- # local 
transport_args= 00:12:12.427 15:22:29 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1543694 00:12:12.427 15:22:29 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1543694' 00:12:12.427 Process pid: 1543694 00:12:12.427 15:22:29 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:12.427 15:22:29 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1543694 00:12:12.427 15:22:29 -- common/autotest_common.sh@817 -- # '[' -z 1543694 ']' 00:12:12.427 15:22:29 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:12.427 15:22:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.427 15:22:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:12.427 15:22:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.427 15:22:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:12.427 15:22:29 -- common/autotest_common.sh@10 -- # set +x 00:12:12.427 [2024-04-26 15:22:29.659983] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:12:12.427 [2024-04-26 15:22:29.660043] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.427 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.427 [2024-04-26 15:22:29.727344] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.427 [2024-04-26 15:22:29.798764] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:12.427 [2024-04-26 15:22:29.798808] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.427 [2024-04-26 15:22:29.798817] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.427 [2024-04-26 15:22:29.798825] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.427 [2024-04-26 15:22:29.798832] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:12.427 [2024-04-26 15:22:29.798934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.427 [2024-04-26 15:22:29.799048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.427 [2024-04-26 15:22:29.799205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.427 [2024-04-26 15:22:29.799207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:12.996 15:22:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:12.996 15:22:30 -- common/autotest_common.sh@850 -- # return 0 00:12:12.996 15:22:30 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:14.378 15:22:31 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:14.378 15:22:31 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:14.378 15:22:31 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:14.378 15:22:31 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:14.378 15:22:31 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:14.378 15:22:31 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:14.378 Malloc1 00:12:14.378 15:22:31 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:14.639 15:22:31 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:14.901 15:22:32 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:14.901 15:22:32 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:14.901 15:22:32 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:14.901 15:22:32 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:15.161 Malloc2 00:12:15.161 15:22:32 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:15.421 15:22:32 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:15.421 15:22:32 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:15.684 15:22:32 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:15.684 15:22:32 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:15.684 15:22:32 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:15.684 15:22:32 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:15.684 15:22:32 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:15.684 15:22:32 -- target/nvmf_vfio_user.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:15.684 [2024-04-26 15:22:33.000234] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:12:15.684 [2024-04-26 15:22:33.000281] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1544336 ] 00:12:15.684 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.684 [2024-04-26 15:22:33.033506] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:15.684 [2024-04-26 15:22:33.038786] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:15.684 [2024-04-26 15:22:33.038805] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5ece94f000 00:12:15.684 [2024-04-26 15:22:33.039781] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:15.684 [2024-04-26 15:22:33.040783] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:15.684 [2024-04-26 15:22:33.041781] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:15.684 [2024-04-26 15:22:33.042794] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:15.684 [2024-04-26 15:22:33.043794] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 
0x0, Flags 0x3, Cap offset 0 00:12:15.684 [2024-04-26 15:22:33.044805] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:15.684 [2024-04-26 15:22:33.045807] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:15.684 [2024-04-26 15:22:33.046810] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:15.684 [2024-04-26 15:22:33.047820] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:15.684 [2024-04-26 15:22:33.047832] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5ece944000 00:12:15.685 [2024-04-26 15:22:33.049163] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:15.685 [2024-04-26 15:22:33.066095] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:15.685 [2024-04-26 15:22:33.066117] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:15.685 [2024-04-26 15:22:33.070959] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:15.685 [2024-04-26 15:22:33.071001] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:15.685 [2024-04-26 15:22:33.071088] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:15.685 [2024-04-26 15:22:33.071108] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:15.685 [2024-04-26 15:22:33.071114] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:15.685 [2024-04-26 15:22:33.071956] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:15.685 [2024-04-26 15:22:33.071966] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:15.685 [2024-04-26 15:22:33.071974] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:15.685 [2024-04-26 15:22:33.072960] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:15.685 [2024-04-26 15:22:33.072969] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:15.685 [2024-04-26 15:22:33.072976] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:15.685 [2024-04-26 15:22:33.073958] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:15.685 [2024-04-26 15:22:33.073966] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:15.685 [2024-04-26 15:22:33.074968] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:15.685 [2024-04-26 15:22:33.074977] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:15.685 [2024-04-26 15:22:33.074982] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:15.685 [2024-04-26 15:22:33.074989] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:15.685 [2024-04-26 15:22:33.075094] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:15.685 [2024-04-26 15:22:33.075099] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:15.685 [2024-04-26 15:22:33.075104] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:15.685 [2024-04-26 15:22:33.075973] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:15.685 [2024-04-26 15:22:33.076977] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:15.685 [2024-04-26 15:22:33.077984] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:15.685 [2024-04-26 15:22:33.078981] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:15.685 [2024-04-26 15:22:33.079039] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:15.685 [2024-04-26 15:22:33.079995] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:15.685 [2024-04-26 15:22:33.080003] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:15.685 [2024-04-26 15:22:33.080010] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:15.685 [2024-04-26 15:22:33.080032] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:15.685 [2024-04-26 15:22:33.080039] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:15.685 [2024-04-26 15:22:33.080055] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:15.685 [2024-04-26 15:22:33.080060] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:15.685 [2024-04-26 15:22:33.080074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:15.685 [2024-04-26 15:22:33.080110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:15.685 [2024-04-26 15:22:33.080119] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:15.685 [2024-04-26 15:22:33.080124] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:15.685 [2024-04-26 15:22:33.080128] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:15.685 [2024-04-26 15:22:33.080133] 
nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:15.685 [2024-04-26 15:22:33.080138] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:15.685 [2024-04-26 15:22:33.080142] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:15.685 [2024-04-26 15:22:33.080147] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:15.685 [2024-04-26 15:22:33.080155] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:15.685 [2024-04-26 15:22:33.080165] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:15.685 [2024-04-26 15:22:33.080173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:15.685 [2024-04-26 15:22:33.080185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.685 [2024-04-26 15:22:33.080194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.685 [2024-04-26 15:22:33.080202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.685 [2024-04-26 15:22:33.080210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.685 [2024-04-26 15:22:33.080215] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:15.685 [2024-04-26 15:22:33.080223] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:15.685 [2024-04-26 15:22:33.080232] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:15.685 [2024-04-26 15:22:33.080239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:15.685 [2024-04-26 15:22:33.080247] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:15.685 [2024-04-26 15:22:33.080252] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:15.685 [2024-04-26 15:22:33.080261] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:15.685 [2024-04-26 15:22:33.080268] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:15.685 [2024-04-26 15:22:33.080277] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:15.685 [2024-04-26 15:22:33.080286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:15.685 [2024-04-26 15:22:33.080335] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:15.686 [2024-04-26 15:22:33.080343] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:15.686 [2024-04-26 15:22:33.080351] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:15.686 [2024-04-26 15:22:33.080355] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:15.686 [2024-04-26 15:22:33.080361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:15.686 [2024-04-26 15:22:33.080370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:15.686 [2024-04-26 15:22:33.080383] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:15.686 [2024-04-26 15:22:33.080391] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:15.686 [2024-04-26 15:22:33.080399] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:15.686 [2024-04-26 15:22:33.080406] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:15.686 [2024-04-26 15:22:33.080410] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:15.686 [2024-04-26 15:22:33.080416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:15.686 [2024-04-26 15:22:33.080433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:15.686 [2024-04-26 15:22:33.080445] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:15.686 [2024-04-26 15:22:33.080453] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:15.686 [2024-04-26 15:22:33.080460] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:15.686 [2024-04-26 15:22:33.080464] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:15.686 [2024-04-26 15:22:33.080470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:15.686 [2024-04-26 15:22:33.080479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:15.686 [2024-04-26 15:22:33.080488] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:15.686 [2024-04-26 15:22:33.080495] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:15.686 [2024-04-26 15:22:33.080502] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:15.686 [2024-04-26 15:22:33.080508] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:15.686 [2024-04-26 15:22:33.080513] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:15.686 [2024-04-26 15:22:33.080518] 
nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:15.686 [2024-04-26 15:22:33.080523] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:15.686 [2024-04-26 15:22:33.080528] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:15.686 [2024-04-26 15:22:33.080545] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:15.686 [2024-04-26 15:22:33.080557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:15.686 [2024-04-26 15:22:33.080568] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:15.686 [2024-04-26 15:22:33.080577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:15.686 [2024-04-26 15:22:33.080587] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:15.686 [2024-04-26 15:22:33.080601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:15.686 [2024-04-26 15:22:33.080611] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:15.686 [2024-04-26 15:22:33.080620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:15.686 [2024-04-26 15:22:33.080630] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:15.686 
[2024-04-26 15:22:33.080635] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:15.686 [2024-04-26 15:22:33.080638] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:15.686 [2024-04-26 15:22:33.080642] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:15.686 [2024-04-26 15:22:33.080648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:15.686 [2024-04-26 15:22:33.080656] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:15.686 [2024-04-26 15:22:33.080660] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:15.686 [2024-04-26 15:22:33.080666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:15.686 [2024-04-26 15:22:33.080673] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:15.686 [2024-04-26 15:22:33.080677] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:15.686 [2024-04-26 15:22:33.080683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:15.686 [2024-04-26 15:22:33.080692] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:15.686 [2024-04-26 15:22:33.080696] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:15.686 [2024-04-26 15:22:33.080702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 
0x2000002f4000 PRP2 0x0 00:12:15.686 [2024-04-26 15:22:33.080709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:15.686 [2024-04-26 15:22:33.080722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:15.686 [2024-04-26 15:22:33.080731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:15.686 [2024-04-26 15:22:33.080738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:15.686 ===================================================== 00:12:15.686 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:15.686 ===================================================== 00:12:15.686 Controller Capabilities/Features 00:12:15.686 ================================ 00:12:15.686 Vendor ID: 4e58 00:12:15.686 Subsystem Vendor ID: 4e58 00:12:15.686 Serial Number: SPDK1 00:12:15.686 Model Number: SPDK bdev Controller 00:12:15.686 Firmware Version: 24.05 00:12:15.686 Recommended Arb Burst: 6 00:12:15.686 IEEE OUI Identifier: 8d 6b 50 00:12:15.686 Multi-path I/O 00:12:15.686 May have multiple subsystem ports: Yes 00:12:15.686 May have multiple controllers: Yes 00:12:15.686 Associated with SR-IOV VF: No 00:12:15.686 Max Data Transfer Size: 131072 00:12:15.686 Max Number of Namespaces: 32 00:12:15.686 Max Number of I/O Queues: 127 00:12:15.686 NVMe Specification Version (VS): 1.3 00:12:15.686 NVMe Specification Version (Identify): 1.3 00:12:15.686 Maximum Queue Entries: 256 00:12:15.686 Contiguous Queues Required: Yes 00:12:15.686 Arbitration Mechanisms Supported 00:12:15.686 Weighted Round Robin: Not Supported 00:12:15.686 Vendor Specific: Not Supported 00:12:15.686 Reset Timeout: 15000 ms 00:12:15.686 Doorbell Stride: 4 bytes 00:12:15.686 NVM Subsystem 
Reset: Not Supported 00:12:15.686 Command Sets Supported 00:12:15.686 NVM Command Set: Supported 00:12:15.686 Boot Partition: Not Supported 00:12:15.686 Memory Page Size Minimum: 4096 bytes 00:12:15.686 Memory Page Size Maximum: 4096 bytes 00:12:15.686 Persistent Memory Region: Not Supported 00:12:15.686 Optional Asynchronous Events Supported 00:12:15.686 Namespace Attribute Notices: Supported 00:12:15.687 Firmware Activation Notices: Not Supported 00:12:15.687 ANA Change Notices: Not Supported 00:12:15.687 PLE Aggregate Log Change Notices: Not Supported 00:12:15.687 LBA Status Info Alert Notices: Not Supported 00:12:15.687 EGE Aggregate Log Change Notices: Not Supported 00:12:15.687 Normal NVM Subsystem Shutdown event: Not Supported 00:12:15.687 Zone Descriptor Change Notices: Not Supported 00:12:15.687 Discovery Log Change Notices: Not Supported 00:12:15.687 Controller Attributes 00:12:15.687 128-bit Host Identifier: Supported 00:12:15.687 Non-Operational Permissive Mode: Not Supported 00:12:15.687 NVM Sets: Not Supported 00:12:15.687 Read Recovery Levels: Not Supported 00:12:15.687 Endurance Groups: Not Supported 00:12:15.687 Predictable Latency Mode: Not Supported 00:12:15.687 Traffic Based Keep ALive: Not Supported 00:12:15.687 Namespace Granularity: Not Supported 00:12:15.687 SQ Associations: Not Supported 00:12:15.687 UUID List: Not Supported 00:12:15.687 Multi-Domain Subsystem: Not Supported 00:12:15.687 Fixed Capacity Management: Not Supported 00:12:15.687 Variable Capacity Management: Not Supported 00:12:15.687 Delete Endurance Group: Not Supported 00:12:15.687 Delete NVM Set: Not Supported 00:12:15.687 Extended LBA Formats Supported: Not Supported 00:12:15.687 Flexible Data Placement Supported: Not Supported 00:12:15.687 00:12:15.687 Controller Memory Buffer Support 00:12:15.687 ================================ 00:12:15.687 Supported: No 00:12:15.687 00:12:15.687 Persistent Memory Region Support 00:12:15.687 ================================ 00:12:15.687 
Supported: No 00:12:15.687 00:12:15.687 Admin Command Set Attributes 00:12:15.687 ============================ 00:12:15.687 Security Send/Receive: Not Supported 00:12:15.687 Format NVM: Not Supported 00:12:15.687 Firmware Activate/Download: Not Supported 00:12:15.687 Namespace Management: Not Supported 00:12:15.687 Device Self-Test: Not Supported 00:12:15.687 Directives: Not Supported 00:12:15.687 NVMe-MI: Not Supported 00:12:15.687 Virtualization Management: Not Supported 00:12:15.687 Doorbell Buffer Config: Not Supported 00:12:15.687 Get LBA Status Capability: Not Supported 00:12:15.687 Command & Feature Lockdown Capability: Not Supported 00:12:15.687 Abort Command Limit: 4 00:12:15.687 Async Event Request Limit: 4 00:12:15.687 Number of Firmware Slots: N/A 00:12:15.687 Firmware Slot 1 Read-Only: N/A 00:12:15.687 Firmware Activation Without Reset: N/A 00:12:15.687 Multiple Update Detection Support: N/A 00:12:15.687 Firmware Update Granularity: No Information Provided 00:12:15.687 Per-Namespace SMART Log: No 00:12:15.687 Asymmetric Namespace Access Log Page: Not Supported 00:12:15.687 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:15.687 Command Effects Log Page: Supported 00:12:15.687 Get Log Page Extended Data: Supported 00:12:15.687 Telemetry Log Pages: Not Supported 00:12:15.687 Persistent Event Log Pages: Not Supported 00:12:15.687 Supported Log Pages Log Page: May Support 00:12:15.687 Commands Supported & Effects Log Page: Not Supported 00:12:15.687 Feature Identifiers & Effects Log Page:May Support 00:12:15.687 NVMe-MI Commands & Effects Log Page: May Support 00:12:15.687 Data Area 4 for Telemetry Log: Not Supported 00:12:15.687 Error Log Page Entries Supported: 128 00:12:15.687 Keep Alive: Supported 00:12:15.687 Keep Alive Granularity: 10000 ms 00:12:15.687 00:12:15.687 NVM Command Set Attributes 00:12:15.687 ========================== 00:12:15.687 Submission Queue Entry Size 00:12:15.687 Max: 64 00:12:15.687 Min: 64 00:12:15.687 Completion Queue Entry 
Size 00:12:15.687 Max: 16 00:12:15.687 Min: 16 00:12:15.687 Number of Namespaces: 32 00:12:15.687 Compare Command: Supported 00:12:15.687 Write Uncorrectable Command: Not Supported 00:12:15.687 Dataset Management Command: Supported 00:12:15.687 Write Zeroes Command: Supported 00:12:15.687 Set Features Save Field: Not Supported 00:12:15.687 Reservations: Not Supported 00:12:15.687 Timestamp: Not Supported 00:12:15.687 Copy: Supported 00:12:15.687 Volatile Write Cache: Present 00:12:15.687 Atomic Write Unit (Normal): 1 00:12:15.687 Atomic Write Unit (PFail): 1 00:12:15.687 Atomic Compare & Write Unit: 1 00:12:15.687 Fused Compare & Write: Supported 00:12:15.687 Scatter-Gather List 00:12:15.687 SGL Command Set: Supported (Dword aligned) 00:12:15.687 SGL Keyed: Not Supported 00:12:15.687 SGL Bit Bucket Descriptor: Not Supported 00:12:15.687 SGL Metadata Pointer: Not Supported 00:12:15.687 Oversized SGL: Not Supported 00:12:15.687 SGL Metadata Address: Not Supported 00:12:15.687 SGL Offset: Not Supported 00:12:15.687 Transport SGL Data Block: Not Supported 00:12:15.687 Replay Protected Memory Block: Not Supported 00:12:15.687 00:12:15.687 Firmware Slot Information 00:12:15.687 ========================= 00:12:15.687 Active slot: 1 00:12:15.687 Slot 1 Firmware Revision: 24.05 00:12:15.687 00:12:15.687 00:12:15.687 Commands Supported and Effects 00:12:15.687 ============================== 00:12:15.687 Admin Commands 00:12:15.687 -------------- 00:12:15.687 Get Log Page (02h): Supported 00:12:15.687 Identify (06h): Supported 00:12:15.687 Abort (08h): Supported 00:12:15.687 Set Features (09h): Supported 00:12:15.687 Get Features (0Ah): Supported 00:12:15.687 Asynchronous Event Request (0Ch): Supported 00:12:15.687 Keep Alive (18h): Supported 00:12:15.687 I/O Commands 00:12:15.687 ------------ 00:12:15.687 Flush (00h): Supported LBA-Change 00:12:15.687 Write (01h): Supported LBA-Change 00:12:15.687 Read (02h): Supported 00:12:15.687 Compare (05h): Supported 00:12:15.687 Write 
Zeroes (08h): Supported LBA-Change 00:12:15.687 Dataset Management (09h): Supported LBA-Change 00:12:15.687 Copy (19h): Supported LBA-Change 00:12:15.687 Unknown (79h): Supported LBA-Change 00:12:15.687 Unknown (7Ah): Supported 00:12:15.687 00:12:15.687 Error Log 00:12:15.687 ========= 00:12:15.687 00:12:15.687 Arbitration 00:12:15.687 =========== 00:12:15.687 Arbitration Burst: 1 00:12:15.687 00:12:15.687 Power Management 00:12:15.687 ================ 00:12:15.687 Number of Power States: 1 00:12:15.687 Current Power State: Power State #0 00:12:15.687 Power State #0: 00:12:15.687 Max Power: 0.00 W 00:12:15.687 Non-Operational State: Operational 00:12:15.687 Entry Latency: Not Reported 00:12:15.687 Exit Latency: Not Reported 00:12:15.687 Relative Read Throughput: 0 00:12:15.688 Relative Read Latency: 0 00:12:15.688 Relative Write Throughput: 0 00:12:15.688 Relative Write Latency: 0 00:12:15.688 Idle Power: Not Reported 00:12:15.688 Active Power: Not Reported 00:12:15.688 Non-Operational Permissive Mode: Not Supported 00:12:15.688 00:12:15.688 Health Information 00:12:15.688 ================== 00:12:15.688 Critical Warnings: 00:12:15.688 Available Spare Space: OK 00:12:15.688 Temperature: OK 00:12:15.688 Device Reliability: OK 00:12:15.688 Read Only: No 00:12:15.688 Volatile Memory Backup: OK 00:12:15.688 Current Temperature: 0 Kelvin (-2[2024-04-26 15:22:33.080847] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:15.688 [2024-04-26 15:22:33.080856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:15.688 [2024-04-26 15:22:33.080880] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:15.688 [2024-04-26 15:22:33.080889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:12:15.688 [2024-04-26 15:22:33.080895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.688 [2024-04-26 15:22:33.080902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.688 [2024-04-26 15:22:33.080908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.688 [2024-04-26 15:22:33.081006] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:15.688 [2024-04-26 15:22:33.081016] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:15.688 [2024-04-26 15:22:33.082003] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:15.688 [2024-04-26 15:22:33.082043] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:15.688 [2024-04-26 15:22:33.082049] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:15.688 [2024-04-26 15:22:33.083012] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:15.688 [2024-04-26 15:22:33.083023] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:15.688 [2024-04-26 15:22:33.083083] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:15.688 [2024-04-26 15:22:33.086846] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 
0x200000200000, Size 0x200000 00:12:15.688 73 Celsius) 00:12:15.688 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:15.688 Available Spare: 0% 00:12:15.688 Available Spare Threshold: 0% 00:12:15.688 Life Percentage Used: 0% 00:12:15.688 Data Units Read: 0 00:12:15.688 Data Units Written: 0 00:12:15.688 Host Read Commands: 0 00:12:15.688 Host Write Commands: 0 00:12:15.688 Controller Busy Time: 0 minutes 00:12:15.688 Power Cycles: 0 00:12:15.688 Power On Hours: 0 hours 00:12:15.688 Unsafe Shutdowns: 0 00:12:15.688 Unrecoverable Media Errors: 0 00:12:15.688 Lifetime Error Log Entries: 0 00:12:15.688 Warning Temperature Time: 0 minutes 00:12:15.688 Critical Temperature Time: 0 minutes 00:12:15.688 00:12:15.688 Number of Queues 00:12:15.688 ================ 00:12:15.688 Number of I/O Submission Queues: 127 00:12:15.688 Number of I/O Completion Queues: 127 00:12:15.688 00:12:15.688 Active Namespaces 00:12:15.688 ================= 00:12:15.688 Namespace ID:1 00:12:15.688 Error Recovery Timeout: Unlimited 00:12:15.688 Command Set Identifier: NVM (00h) 00:12:15.688 Deallocate: Supported 00:12:15.688 Deallocated/Unwritten Error: Not Supported 00:12:15.688 Deallocated Read Value: Unknown 00:12:15.688 Deallocate in Write Zeroes: Not Supported 00:12:15.688 Deallocated Guard Field: 0xFFFF 00:12:15.688 Flush: Supported 00:12:15.688 Reservation: Supported 00:12:15.688 Namespace Sharing Capabilities: Multiple Controllers 00:12:15.688 Size (in LBAs): 131072 (0GiB) 00:12:15.688 Capacity (in LBAs): 131072 (0GiB) 00:12:15.688 Utilization (in LBAs): 131072 (0GiB) 00:12:15.688 NGUID: 21797AD07EB842948A7E69864A479167 00:12:15.688 UUID: 21797ad0-7eb8-4294-8a7e-69864a479167 00:12:15.688 Thin Provisioning: Not Supported 00:12:15.688 Per-NS Atomic Units: Yes 00:12:15.688 Atomic Boundary Size (Normal): 0 00:12:15.688 Atomic Boundary Size (PFail): 0 00:12:15.688 Atomic Boundary Offset: 0 00:12:15.688 Maximum Single Source Range Length: 65535 00:12:15.688 Maximum Copy Length: 65535 
00:12:15.688 Maximum Source Range Count: 1 00:12:15.688 NGUID/EUI64 Never Reused: No 00:12:15.688 Namespace Write Protected: No 00:12:15.688 Number of LBA Formats: 1 00:12:15.688 Current LBA Format: LBA Format #00 00:12:15.688 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:15.688 00:12:15.688 15:22:33 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:15.954 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.954 [2024-04-26 15:22:33.272472] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:21.243 [2024-04-26 15:22:38.292240] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:21.243 Initializing NVMe Controllers 00:12:21.243 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:21.243 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:21.243 Initialization complete. Launching workers. 
00:12:21.243 ======================================================== 00:12:21.243 Latency(us) 00:12:21.243 Device Information : IOPS MiB/s Average min max 00:12:21.243 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39946.77 156.04 3204.13 847.27 7804.81 00:12:21.243 ======================================================== 00:12:21.243 Total : 39946.77 156.04 3204.13 847.27 7804.81 00:12:21.243 00:12:21.243 15:22:38 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:21.243 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.243 [2024-04-26 15:22:38.463029] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:26.541 [2024-04-26 15:22:43.498424] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:26.541 Initializing NVMe Controllers 00:12:26.541 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:26.541 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:26.541 Initialization complete. Launching workers. 
00:12:26.541 ======================================================== 00:12:26.541 Latency(us) 00:12:26.541 Device Information : IOPS MiB/s Average min max 00:12:26.541 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.78 6987.69 8971.15 00:12:26.541 ======================================================== 00:12:26.541 Total : 16051.20 62.70 7980.78 6987.69 8971.15 00:12:26.541 00:12:26.541 15:22:43 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:26.541 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.541 [2024-04-26 15:22:43.679229] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:31.826 [2024-04-26 15:22:48.758090] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:31.826 Initializing NVMe Controllers 00:12:31.826 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:31.826 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:31.826 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:31.826 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:31.826 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:31.826 Initialization complete. Launching workers. 
00:12:31.826 Starting thread on core 2 00:12:31.826 Starting thread on core 3 00:12:31.826 Starting thread on core 1 00:12:31.826 15:22:48 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:31.826 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.826 [2024-04-26 15:22:49.022237] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:35.123 [2024-04-26 15:22:52.081548] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:35.123 Initializing NVMe Controllers 00:12:35.123 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:35.123 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:35.123 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:35.123 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:35.123 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:35.123 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:35.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:35.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:35.123 Initialization complete. Launching workers. 
00:12:35.123 Starting thread on core 1 with urgent priority queue 00:12:35.123 Starting thread on core 2 with urgent priority queue 00:12:35.123 Starting thread on core 3 with urgent priority queue 00:12:35.123 Starting thread on core 0 with urgent priority queue 00:12:35.123 SPDK bdev Controller (SPDK1 ) core 0: 11272.33 IO/s 8.87 secs/100000 ios 00:12:35.123 SPDK bdev Controller (SPDK1 ) core 1: 14088.00 IO/s 7.10 secs/100000 ios 00:12:35.123 SPDK bdev Controller (SPDK1 ) core 2: 8717.00 IO/s 11.47 secs/100000 ios 00:12:35.123 SPDK bdev Controller (SPDK1 ) core 3: 16619.00 IO/s 6.02 secs/100000 ios 00:12:35.123 ======================================================== 00:12:35.123 00:12:35.123 15:22:52 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:35.123 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.123 [2024-04-26 15:22:52.339304] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:35.123 [2024-04-26 15:22:52.375519] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:35.123 Initializing NVMe Controllers 00:12:35.123 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:35.123 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:35.123 Namespace ID: 1 size: 0GB 00:12:35.123 Initialization complete. 00:12:35.123 INFO: using host memory buffer for IO 00:12:35.123 Hello world! 
00:12:35.123 15:22:52 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:35.123 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.384 [2024-04-26 15:22:52.629968] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:36.325 Initializing NVMe Controllers 00:12:36.325 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:36.325 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:36.325 Initialization complete. Launching workers. 00:12:36.325 submit (in ns) avg, min, max = 7747.7, 3893.3, 4000695.8 00:12:36.325 complete (in ns) avg, min, max = 18133.5, 2342.5, 4996932.5 00:12:36.325 00:12:36.325 Submit histogram 00:12:36.325 ================ 00:12:36.325 Range in us Cumulative Count 00:12:36.325 3.893 - 3.920: 0.6678% ( 130) 00:12:36.325 3.920 - 3.947: 5.2088% ( 884) 00:12:36.325 3.947 - 3.973: 14.5066% ( 1810) 00:12:36.325 3.973 - 4.000: 26.5424% ( 2343) 00:12:36.325 4.000 - 4.027: 39.0507% ( 2435) 00:12:36.325 4.027 - 4.053: 54.0196% ( 2914) 00:12:36.325 4.053 - 4.080: 71.2899% ( 3362) 00:12:36.325 4.080 - 4.107: 84.1527% ( 2504) 00:12:36.325 4.107 - 4.133: 92.0481% ( 1537) 00:12:36.325 4.133 - 4.160: 96.9744% ( 959) 00:12:36.325 4.160 - 4.187: 98.7055% ( 337) 00:12:36.325 4.187 - 4.213: 99.2603% ( 108) 00:12:36.325 4.213 - 4.240: 99.4812% ( 43) 00:12:36.325 4.240 - 4.267: 99.5120% ( 6) 00:12:36.325 4.267 - 4.293: 99.5428% ( 6) 00:12:36.325 4.427 - 4.453: 99.5480% ( 1) 00:12:36.325 4.533 - 4.560: 99.5531% ( 1) 00:12:36.325 4.613 - 4.640: 99.5582% ( 1) 00:12:36.325 4.667 - 4.693: 99.5634% ( 1) 00:12:36.325 4.827 - 4.853: 99.5685% ( 1) 00:12:36.325 4.880 - 4.907: 99.5736% ( 1) 00:12:36.325 4.987 - 5.013: 99.5839% ( 2) 00:12:36.325 5.093 - 5.120: 99.5890% ( 1) 00:12:36.325 5.120 - 5.147: 99.5942% ( 1) 
00:12:36.325 5.387 - 5.413: 99.5993% ( 1) 00:12:36.325 5.413 - 5.440: 99.6045% ( 1) 00:12:36.325 5.520 - 5.547: 99.6096% ( 1) 00:12:36.325 5.733 - 5.760: 99.6147% ( 1) 00:12:36.325 6.027 - 6.053: 99.6199% ( 1) 00:12:36.325 6.080 - 6.107: 99.6301% ( 2) 00:12:36.325 6.107 - 6.133: 99.6404% ( 2) 00:12:36.325 6.160 - 6.187: 99.6610% ( 4) 00:12:36.325 6.187 - 6.213: 99.6661% ( 1) 00:12:36.325 6.213 - 6.240: 99.6815% ( 3) 00:12:36.325 6.240 - 6.267: 99.6866% ( 1) 00:12:36.325 6.267 - 6.293: 99.6918% ( 1) 00:12:36.325 6.347 - 6.373: 99.6969% ( 1) 00:12:36.325 6.373 - 6.400: 99.7021% ( 1) 00:12:36.325 6.400 - 6.427: 99.7072% ( 1) 00:12:36.325 6.427 - 6.453: 99.7123% ( 1) 00:12:36.325 6.453 - 6.480: 99.7175% ( 1) 00:12:36.325 6.560 - 6.587: 99.7226% ( 1) 00:12:36.325 6.613 - 6.640: 99.7329% ( 2) 00:12:36.325 6.693 - 6.720: 99.7380% ( 1) 00:12:36.325 6.747 - 6.773: 99.7432% ( 1) 00:12:36.325 6.827 - 6.880: 99.7483% ( 1) 00:12:36.325 6.933 - 6.987: 99.7534% ( 1) 00:12:36.325 7.093 - 7.147: 99.7586% ( 1) 00:12:36.325 7.200 - 7.253: 99.7637% ( 1) 00:12:36.325 7.253 - 7.307: 99.7688% ( 1) 00:12:36.325 7.413 - 7.467: 99.7843% ( 3) 00:12:36.325 7.573 - 7.627: 99.7894% ( 1) 00:12:36.325 7.627 - 7.680: 99.7997% ( 2) 00:12:36.325 7.680 - 7.733: 99.8099% ( 2) 00:12:36.325 7.733 - 7.787: 99.8151% ( 1) 00:12:36.325 7.787 - 7.840: 99.8202% ( 1) 00:12:36.325 7.840 - 7.893: 99.8305% ( 2) 00:12:36.325 8.000 - 8.053: 99.8408% ( 2) 00:12:36.325 8.053 - 8.107: 99.8510% ( 2) 00:12:36.325 8.160 - 8.213: 99.8613% ( 2) 00:12:36.325 8.213 - 8.267: 99.8664% ( 1) 00:12:36.325 8.267 - 8.320: 99.8716% ( 1) 00:12:36.325 8.320 - 8.373: 99.8767% ( 1) 00:12:36.325 8.693 - 8.747: 99.8819% ( 1) 00:12:36.325 8.747 - 8.800: 99.8921% ( 2) 00:12:36.325 9.013 - 9.067: 99.8973% ( 1) 00:12:36.325 9.867 - 9.920: 99.9024% ( 1) 00:12:36.325 10.027 - 10.080: 99.9075% ( 1) 00:12:36.325 3986.773 - 4014.080: 100.0000% ( 18) 00:12:36.325 00:12:36.325 Complete histogram 00:12:36.325 ================== 00:12:36.325 Range in 
us Cumulative Count 00:12:36.325 2.333 - 2.347: 0.0051% ( 1) 00:12:36.325 2.347 - 2.360: 0.0205% ( 3) 00:12:36.325 2.360 - 2.373: 0.3031% ( 55) 00:12:36.325 2.373 - 2.387: 1.1147% ( 158) 00:12:36.325 2.387 - 2.400: 1.2277% ( 22) 00:12:36.325 2.400 - 2.413: 1.2894% ( 12) 00:12:36.325 2.413 - 2.427: 1.3202% ( 6) 00:12:36.325 2.427 - 2.440: 35.3830% ( 6631) 00:12:36.325 2.440 - [2024-04-26 15:22:53.651394] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:36.325 2.453: 59.9938% ( 4791) 00:12:36.325 2.453 - 2.467: 69.8721% ( 1923) 00:12:36.325 2.467 - 2.480: 78.1271% ( 1607) 00:12:36.325 2.480 - 2.493: 81.4918% ( 655) 00:12:36.325 2.493 - 2.507: 83.4900% ( 389) 00:12:36.325 2.507 - 2.520: 89.3409% ( 1139) 00:12:36.325 2.520 - 2.533: 94.2261% ( 951) 00:12:36.325 2.533 - 2.547: 96.5942% ( 461) 00:12:36.325 2.547 - 2.560: 98.2432% ( 321) 00:12:36.325 2.560 - 2.573: 99.0343% ( 154) 00:12:36.325 2.573 - 2.587: 99.2346% ( 39) 00:12:36.325 2.587 - 2.600: 99.2808% ( 9) 00:12:36.325 2.600 - 2.613: 99.2962% ( 3) 00:12:36.325 2.627 - 2.640: 99.3014% ( 1) 00:12:36.325 4.613 - 4.640: 99.3065% ( 1) 00:12:36.325 4.720 - 4.747: 99.3168% ( 2) 00:12:36.325 4.800 - 4.827: 99.3271% ( 2) 00:12:36.325 4.907 - 4.933: 99.3322% ( 1) 00:12:36.325 4.987 - 5.013: 99.3425% ( 2) 00:12:36.325 5.093 - 5.120: 99.3476% ( 1) 00:12:36.325 5.173 - 5.200: 99.3528% ( 1) 00:12:36.325 5.227 - 5.253: 99.3579% ( 1) 00:12:36.325 5.333 - 5.360: 99.3630% ( 1) 00:12:36.325 5.440 - 5.467: 99.3733% ( 2) 00:12:36.325 5.467 - 5.493: 99.3784% ( 1) 00:12:36.325 5.573 - 5.600: 99.3887% ( 2) 00:12:36.325 5.600 - 5.627: 99.3938% ( 1) 00:12:36.325 5.627 - 5.653: 99.3990% ( 1) 00:12:36.325 5.653 - 5.680: 99.4093% ( 2) 00:12:36.325 5.680 - 5.707: 99.4247% ( 3) 00:12:36.325 5.707 - 5.733: 99.4298% ( 1) 00:12:36.326 5.733 - 5.760: 99.4349% ( 1) 00:12:36.326 5.760 - 5.787: 99.4401% ( 1) 00:12:36.326 5.813 - 5.840: 99.4452% ( 1) 00:12:36.326 5.867 - 5.893: 99.4504% ( 1) 
00:12:36.326 5.947 - 5.973: 99.4555% ( 1) 00:12:36.326 6.080 - 6.107: 99.4606% ( 1) 00:12:36.326 6.160 - 6.187: 99.4658% ( 1) 00:12:36.326 6.187 - 6.213: 99.4709% ( 1) 00:12:36.326 6.213 - 6.240: 99.4760% ( 1) 00:12:36.326 6.240 - 6.267: 99.4812% ( 1) 00:12:36.326 6.267 - 6.293: 99.4863% ( 1) 00:12:36.326 6.320 - 6.347: 99.4914% ( 1) 00:12:36.326 6.373 - 6.400: 99.4966% ( 1) 00:12:36.326 6.400 - 6.427: 99.5069% ( 2) 00:12:36.326 6.427 - 6.453: 99.5171% ( 2) 00:12:36.326 6.533 - 6.560: 99.5223% ( 1) 00:12:36.326 6.827 - 6.880: 99.5274% ( 1) 00:12:36.326 6.987 - 7.040: 99.5377% ( 2) 00:12:36.326 7.040 - 7.093: 99.5428% ( 1) 00:12:36.326 7.147 - 7.200: 99.5480% ( 1) 00:12:36.326 7.253 - 7.307: 99.5531% ( 1) 00:12:36.326 7.467 - 7.520: 99.5582% ( 1) 00:12:36.326 7.573 - 7.627: 99.5634% ( 1) 00:12:36.326 7.733 - 7.787: 99.5685% ( 1) 00:12:36.326 7.893 - 7.947: 99.5736% ( 1) 00:12:36.326 8.267 - 8.320: 99.5788% ( 1) 00:12:36.326 9.813 - 9.867: 99.5890% ( 2) 00:12:36.326 13.547 - 13.600: 99.5942% ( 1) 00:12:36.326 13.973 - 14.080: 99.5993% ( 1) 00:12:36.326 14.507 - 14.613: 99.6045% ( 1) 00:12:36.326 44.160 - 44.373: 99.6096% ( 1) 00:12:36.326 3017.387 - 3031.040: 99.6147% ( 1) 00:12:36.326 3031.040 - 3044.693: 99.6199% ( 1) 00:12:36.326 3986.773 - 4014.080: 99.9795% ( 70) 00:12:36.326 4096.000 - 4123.307: 99.9846% ( 1) 00:12:36.326 4969.813 - 4997.120: 100.0000% ( 3) 00:12:36.326 00:12:36.326 15:22:53 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:36.326 15:22:53 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:36.326 15:22:53 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:36.326 15:22:53 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:36.326 15:22:53 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:36.586 [2024-04-26 
15:22:53.841906] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:12:36.586 [ 00:12:36.586 { 00:12:36.586 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:36.586 "subtype": "Discovery", 00:12:36.586 "listen_addresses": [], 00:12:36.586 "allow_any_host": true, 00:12:36.586 "hosts": [] 00:12:36.586 }, 00:12:36.586 { 00:12:36.586 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:36.586 "subtype": "NVMe", 00:12:36.586 "listen_addresses": [ 00:12:36.586 { 00:12:36.586 "transport": "VFIOUSER", 00:12:36.586 "trtype": "VFIOUSER", 00:12:36.586 "adrfam": "IPv4", 00:12:36.586 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:36.586 "trsvcid": "0" 00:12:36.586 } 00:12:36.586 ], 00:12:36.586 "allow_any_host": true, 00:12:36.586 "hosts": [], 00:12:36.586 "serial_number": "SPDK1", 00:12:36.586 "model_number": "SPDK bdev Controller", 00:12:36.586 "max_namespaces": 32, 00:12:36.586 "min_cntlid": 1, 00:12:36.586 "max_cntlid": 65519, 00:12:36.586 "namespaces": [ 00:12:36.586 { 00:12:36.586 "nsid": 1, 00:12:36.586 "bdev_name": "Malloc1", 00:12:36.586 "name": "Malloc1", 00:12:36.586 "nguid": "21797AD07EB842948A7E69864A479167", 00:12:36.586 "uuid": "21797ad0-7eb8-4294-8a7e-69864a479167" 00:12:36.586 } 00:12:36.586 ] 00:12:36.586 }, 00:12:36.586 { 00:12:36.586 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:36.586 "subtype": "NVMe", 00:12:36.586 "listen_addresses": [ 00:12:36.586 { 00:12:36.586 "transport": "VFIOUSER", 00:12:36.586 "trtype": "VFIOUSER", 00:12:36.586 "adrfam": "IPv4", 00:12:36.586 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:36.586 "trsvcid": "0" 00:12:36.586 } 00:12:36.586 ], 00:12:36.586 "allow_any_host": true, 00:12:36.586 "hosts": [], 00:12:36.586 "serial_number": "SPDK2", 00:12:36.586 "model_number": "SPDK bdev Controller", 00:12:36.586 "max_namespaces": 32, 00:12:36.586 "min_cntlid": 1, 00:12:36.586 "max_cntlid": 65519, 
00:12:36.586 "namespaces": [ 00:12:36.586 { 00:12:36.586 "nsid": 1, 00:12:36.586 "bdev_name": "Malloc2", 00:12:36.586 "name": "Malloc2", 00:12:36.586 "nguid": "75A9F7B81CF44D9FBADE156BA2DEEFD0", 00:12:36.586 "uuid": "75a9f7b8-1cf4-4d9f-bade-156ba2deefd0" 00:12:36.586 } 00:12:36.586 ] 00:12:36.586 } 00:12:36.586 ] 00:12:36.586 15:22:53 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:36.586 15:22:53 -- target/nvmf_vfio_user.sh@34 -- # aerpid=1548502 00:12:36.586 15:22:53 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:36.586 15:22:53 -- common/autotest_common.sh@1251 -- # local i=0 00:12:36.586 15:22:53 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:36.586 15:22:53 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:36.586 15:22:53 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:36.586 15:22:53 -- common/autotest_common.sh@1262 -- # return 0 00:12:36.586 15:22:53 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:36.586 15:22:53 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:36.586 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.586 Malloc3 00:12:36.861 [2024-04-26 15:22:54.042303] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:36.862 15:22:54 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:36.862 [2024-04-26 15:22:54.188325] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:36.862 15:22:54 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:36.862 Asynchronous Event Request test 00:12:36.862 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:36.862 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:36.862 Registering asynchronous event callbacks... 00:12:36.862 Starting namespace attribute notice tests for all controllers... 00:12:36.862 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:36.862 aer_cb - Changed Namespace 00:12:36.862 Cleaning up... 
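The `nvmf_get_subsystems` RPC dumps that bracket the AER test return a JSON array of subsystems. A minimal sketch of consuming that output (the helper is illustrative, not an SPDK API; the sample below is trimmed from the RPC response in this log, after `Malloc3` was attached as nsid 2):

```python
import json

# Trimmed nvmf_get_subsystems response, keeping only the fields used below.
subsystems_json = """
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery"},
  {"nqn": "nqn.2019-07.io.spdk:cnode1", "subtype": "NVMe",
   "namespaces": [{"nsid": 1, "bdev_name": "Malloc1"},
                  {"nsid": 2, "bdev_name": "Malloc3"}]},
  {"nqn": "nqn.2019-07.io.spdk:cnode2", "subtype": "NVMe",
   "namespaces": [{"nsid": 1, "bdev_name": "Malloc2"}]}
]
"""

def namespaces_by_subsystem(raw):
    """Map each NVMe subsystem NQN to its (nsid, bdev_name) pairs."""
    out = {}
    for subsys in json.loads(raw):
        if subsys.get("subtype") != "NVMe":
            continue  # skip the discovery subsystem
        out[subsys["nqn"]] = [(ns["nsid"], ns["bdev_name"])
                              for ns in subsys.get("namespaces", [])]
    return out

print(namespaces_by_subsystem(subsystems_json))
```

This is how the test verifies the namespace-attribute-changed AER: the second dump shows nsid 2 (`Malloc3`) on `cnode1`, which the first dump lacked.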
00:12:37.125 [ 00:12:37.125 { 00:12:37.125 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:37.125 "subtype": "Discovery", 00:12:37.125 "listen_addresses": [], 00:12:37.125 "allow_any_host": true, 00:12:37.125 "hosts": [] 00:12:37.125 }, 00:12:37.125 { 00:12:37.125 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:37.125 "subtype": "NVMe", 00:12:37.125 "listen_addresses": [ 00:12:37.125 { 00:12:37.125 "transport": "VFIOUSER", 00:12:37.125 "trtype": "VFIOUSER", 00:12:37.125 "adrfam": "IPv4", 00:12:37.125 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:37.125 "trsvcid": "0" 00:12:37.125 } 00:12:37.125 ], 00:12:37.125 "allow_any_host": true, 00:12:37.125 "hosts": [], 00:12:37.125 "serial_number": "SPDK1", 00:12:37.125 "model_number": "SPDK bdev Controller", 00:12:37.125 "max_namespaces": 32, 00:12:37.125 "min_cntlid": 1, 00:12:37.125 "max_cntlid": 65519, 00:12:37.125 "namespaces": [ 00:12:37.125 { 00:12:37.125 "nsid": 1, 00:12:37.125 "bdev_name": "Malloc1", 00:12:37.125 "name": "Malloc1", 00:12:37.125 "nguid": "21797AD07EB842948A7E69864A479167", 00:12:37.125 "uuid": "21797ad0-7eb8-4294-8a7e-69864a479167" 00:12:37.125 }, 00:12:37.125 { 00:12:37.125 "nsid": 2, 00:12:37.125 "bdev_name": "Malloc3", 00:12:37.125 "name": "Malloc3", 00:12:37.125 "nguid": "528D824374F9457A9EC0C6403E1A0B43", 00:12:37.125 "uuid": "528d8243-74f9-457a-9ec0-c6403e1a0b43" 00:12:37.125 } 00:12:37.125 ] 00:12:37.125 }, 00:12:37.125 { 00:12:37.125 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:37.125 "subtype": "NVMe", 00:12:37.125 "listen_addresses": [ 00:12:37.125 { 00:12:37.125 "transport": "VFIOUSER", 00:12:37.125 "trtype": "VFIOUSER", 00:12:37.125 "adrfam": "IPv4", 00:12:37.125 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:37.125 "trsvcid": "0" 00:12:37.125 } 00:12:37.125 ], 00:12:37.125 "allow_any_host": true, 00:12:37.125 "hosts": [], 00:12:37.125 "serial_number": "SPDK2", 00:12:37.125 "model_number": "SPDK bdev Controller", 00:12:37.125 "max_namespaces": 32, 00:12:37.125 
"min_cntlid": 1, 00:12:37.125 "max_cntlid": 65519, 00:12:37.125 "namespaces": [ 00:12:37.125 { 00:12:37.125 "nsid": 1, 00:12:37.125 "bdev_name": "Malloc2", 00:12:37.125 "name": "Malloc2", 00:12:37.125 "nguid": "75A9F7B81CF44D9FBADE156BA2DEEFD0", 00:12:37.125 "uuid": "75a9f7b8-1cf4-4d9f-bade-156ba2deefd0" 00:12:37.125 } 00:12:37.125 ] 00:12:37.125 } 00:12:37.125 ] 00:12:37.125 15:22:54 -- target/nvmf_vfio_user.sh@44 -- # wait 1548502 00:12:37.125 15:22:54 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:37.125 15:22:54 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:37.125 15:22:54 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:37.125 15:22:54 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:37.125 [2024-04-26 15:22:54.402537] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:12:37.125 [2024-04-26 15:22:54.402571] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1548577 ] 00:12:37.125 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.125 [2024-04-26 15:22:54.433349] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:37.125 [2024-04-26 15:22:54.442614] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:37.125 [2024-04-26 15:22:54.442634] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6c3a28e000 00:12:37.125 [2024-04-26 15:22:54.443612] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:37.125 [2024-04-26 15:22:54.444618] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:37.125 [2024-04-26 15:22:54.445631] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:37.125 [2024-04-26 15:22:54.446637] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:37.125 [2024-04-26 15:22:54.447642] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:37.125 [2024-04-26 15:22:54.448650] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:37.125 [2024-04-26 15:22:54.449660] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, 
Flags 0x3, Cap offset 0 00:12:37.125 [2024-04-26 15:22:54.450666] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:37.125 [2024-04-26 15:22:54.451676] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:37.125 [2024-04-26 15:22:54.451689] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6c3a283000 00:12:37.125 [2024-04-26 15:22:54.453017] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:37.125 [2024-04-26 15:22:54.474000] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:37.125 [2024-04-26 15:22:54.474019] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:37.125 [2024-04-26 15:22:54.476066] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:37.125 [2024-04-26 15:22:54.476110] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:37.125 [2024-04-26 15:22:54.476188] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:37.125 [2024-04-26 15:22:54.476203] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:37.125 [2024-04-26 15:22:54.476208] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:37.125 [2024-04-26 15:22:54.477073] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:37.125 [2024-04-26 15:22:54.477082] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:37.125 [2024-04-26 15:22:54.477089] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:37.126 [2024-04-26 15:22:54.478078] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:37.126 [2024-04-26 15:22:54.478086] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:37.126 [2024-04-26 15:22:54.478093] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:37.126 [2024-04-26 15:22:54.479078] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:37.126 [2024-04-26 15:22:54.479087] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:37.126 [2024-04-26 15:22:54.480088] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:37.126 [2024-04-26 15:22:54.480096] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:37.126 [2024-04-26 15:22:54.480101] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:37.126 [2024-04-26 15:22:54.480108] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:37.126 [2024-04-26 15:22:54.480213] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:37.126 [2024-04-26 15:22:54.480222] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:37.126 [2024-04-26 15:22:54.480227] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:37.126 [2024-04-26 15:22:54.481102] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:37.126 [2024-04-26 15:22:54.482104] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:37.126 [2024-04-26 15:22:54.483117] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:37.126 [2024-04-26 15:22:54.484117] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:37.126 [2024-04-26 15:22:54.484156] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:37.126 [2024-04-26 15:22:54.485132] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:37.126 [2024-04-26 15:22:54.485140] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:37.126 [2024-04-26 15:22:54.485145] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:37.126 [2024-04-26 15:22:54.485166] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:37.126 [2024-04-26 15:22:54.485177] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:37.126 [2024-04-26 15:22:54.485191] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:37.126 [2024-04-26 15:22:54.485196] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:37.126 [2024-04-26 15:22:54.485208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:37.126 [2024-04-26 15:22:54.491844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:37.126 [2024-04-26 15:22:54.491856] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:37.126 [2024-04-26 15:22:54.491860] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:37.126 [2024-04-26 15:22:54.491865] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:37.126 [2024-04-26 15:22:54.491869] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:37.126 [2024-04-26 15:22:54.491874] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:37.126 [2024-04-26 15:22:54.491878] 
nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:37.126 [2024-04-26 15:22:54.491883] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:37.126 [2024-04-26 15:22:54.491890] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:37.126 [2024-04-26 15:22:54.491900] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:37.126 [2024-04-26 15:22:54.499844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:37.126 [2024-04-26 15:22:54.499859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.126 [2024-04-26 15:22:54.499868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.126 [2024-04-26 15:22:54.499876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.126 [2024-04-26 15:22:54.499884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.126 [2024-04-26 15:22:54.499889] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:37.126 [2024-04-26 15:22:54.499897] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:37.126 [2024-04-26 15:22:54.499906] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:37.126 [2024-04-26 15:22:54.507844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:37.126 [2024-04-26 15:22:54.507852] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:37.126 [2024-04-26 15:22:54.507856] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:37.126 [2024-04-26 15:22:54.507865] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:37.126 [2024-04-26 15:22:54.507870] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:37.126 [2024-04-26 15:22:54.507879] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:37.126 [2024-04-26 15:22:54.515844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:37.126 [2024-04-26 15:22:54.515896] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:37.126 [2024-04-26 15:22:54.515904] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:37.126 [2024-04-26 15:22:54.515911] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:37.126 [2024-04-26 15:22:54.515915] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:37.126 [2024-04-26 15:22:54.515922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:37.126 [2024-04-26 15:22:54.523843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:37.126 [2024-04-26 15:22:54.523853] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:37.126 [2024-04-26 15:22:54.523862] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:37.126 [2024-04-26 15:22:54.523869] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:37.126 [2024-04-26 15:22:54.523876] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:37.126 [2024-04-26 15:22:54.523882] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:37.126 [2024-04-26 15:22:54.523889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:37.126 [2024-04-26 15:22:54.531842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:37.126 [2024-04-26 15:22:54.531855] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:37.127 [2024-04-26 15:22:54.531863] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:12:37.127 [2024-04-26 15:22:54.531870] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:37.127 [2024-04-26 15:22:54.531874] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:37.127 [2024-04-26 15:22:54.531881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:37.127 [2024-04-26 15:22:54.539844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:37.127 [2024-04-26 15:22:54.539853] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:37.127 [2024-04-26 15:22:54.539860] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:37.127 [2024-04-26 15:22:54.539867] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:37.127 [2024-04-26 15:22:54.539873] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:37.127 [2024-04-26 15:22:54.539877] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:37.127 [2024-04-26 15:22:54.539882] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:37.127 [2024-04-26 15:22:54.539886] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:37.127 [2024-04-26 
15:22:54.539891] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:37.127 [2024-04-26 15:22:54.539907] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:37.127 [2024-04-26 15:22:54.547844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:37.127 [2024-04-26 15:22:54.547857] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:37.127 [2024-04-26 15:22:54.555843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:37.127 [2024-04-26 15:22:54.555856] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:37.127 [2024-04-26 15:22:54.563843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:37.127 [2024-04-26 15:22:54.563856] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:37.127 [2024-04-26 15:22:54.571844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:37.127 [2024-04-26 15:22:54.571857] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:37.127 [2024-04-26 15:22:54.571863] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:37.127 [2024-04-26 15:22:54.571867] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:37.127 [2024-04-26 15:22:54.571870] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 
00:12:37.127 [2024-04-26 15:22:54.571876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:37.127 [2024-04-26 15:22:54.571884] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:37.127 [2024-04-26 15:22:54.571888] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:37.127 [2024-04-26 15:22:54.571894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:37.127 [2024-04-26 15:22:54.571901] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:37.127 [2024-04-26 15:22:54.571905] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:37.127 [2024-04-26 15:22:54.571911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:37.127 [2024-04-26 15:22:54.571919] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:37.127 [2024-04-26 15:22:54.571924] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:37.127 [2024-04-26 15:22:54.571929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:37.388 [2024-04-26 15:22:54.579845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:37.388 [2024-04-26 15:22:54.579860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:37.388 [2024-04-26 
15:22:54.579869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:37.388 [2024-04-26 15:22:54.579876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:37.388 ===================================================== 00:12:37.388 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:37.388 ===================================================== 00:12:37.388 Controller Capabilities/Features 00:12:37.388 ================================ 00:12:37.388 Vendor ID: 4e58 00:12:37.388 Subsystem Vendor ID: 4e58 00:12:37.388 Serial Number: SPDK2 00:12:37.388 Model Number: SPDK bdev Controller 00:12:37.388 Firmware Version: 24.05 00:12:37.388 Recommended Arb Burst: 6 00:12:37.388 IEEE OUI Identifier: 8d 6b 50 00:12:37.388 Multi-path I/O 00:12:37.388 May have multiple subsystem ports: Yes 00:12:37.388 May have multiple controllers: Yes 00:12:37.388 Associated with SR-IOV VF: No 00:12:37.388 Max Data Transfer Size: 131072 00:12:37.388 Max Number of Namespaces: 32 00:12:37.388 Max Number of I/O Queues: 127 00:12:37.388 NVMe Specification Version (VS): 1.3 00:12:37.388 NVMe Specification Version (Identify): 1.3 00:12:37.388 Maximum Queue Entries: 256 00:12:37.388 Contiguous Queues Required: Yes 00:12:37.388 Arbitration Mechanisms Supported 00:12:37.388 Weighted Round Robin: Not Supported 00:12:37.388 Vendor Specific: Not Supported 00:12:37.388 Reset Timeout: 15000 ms 00:12:37.388 Doorbell Stride: 4 bytes 00:12:37.388 NVM Subsystem Reset: Not Supported 00:12:37.388 Command Sets Supported 00:12:37.388 NVM Command Set: Supported 00:12:37.388 Boot Partition: Not Supported 00:12:37.388 Memory Page Size Minimum: 4096 bytes 00:12:37.388 Memory Page Size Maximum: 4096 bytes 00:12:37.388 Persistent Memory Region: Not Supported 00:12:37.388 Optional Asynchronous Events Supported 00:12:37.388 Namespace 
Attribute Notices: Supported 00:12:37.388 Firmware Activation Notices: Not Supported 00:12:37.388 ANA Change Notices: Not Supported 00:12:37.388 PLE Aggregate Log Change Notices: Not Supported 00:12:37.388 LBA Status Info Alert Notices: Not Supported 00:12:37.388 EGE Aggregate Log Change Notices: Not Supported 00:12:37.388 Normal NVM Subsystem Shutdown event: Not Supported 00:12:37.388 Zone Descriptor Change Notices: Not Supported 00:12:37.388 Discovery Log Change Notices: Not Supported 00:12:37.388 Controller Attributes 00:12:37.388 128-bit Host Identifier: Supported 00:12:37.388 Non-Operational Permissive Mode: Not Supported 00:12:37.388 NVM Sets: Not Supported 00:12:37.388 Read Recovery Levels: Not Supported 00:12:37.388 Endurance Groups: Not Supported 00:12:37.388 Predictable Latency Mode: Not Supported 00:12:37.388 Traffic Based Keep ALive: Not Supported 00:12:37.388 Namespace Granularity: Not Supported 00:12:37.388 SQ Associations: Not Supported 00:12:37.388 UUID List: Not Supported 00:12:37.388 Multi-Domain Subsystem: Not Supported 00:12:37.388 Fixed Capacity Management: Not Supported 00:12:37.388 Variable Capacity Management: Not Supported 00:12:37.388 Delete Endurance Group: Not Supported 00:12:37.388 Delete NVM Set: Not Supported 00:12:37.388 Extended LBA Formats Supported: Not Supported 00:12:37.388 Flexible Data Placement Supported: Not Supported 00:12:37.388 00:12:37.388 Controller Memory Buffer Support 00:12:37.388 ================================ 00:12:37.388 Supported: No 00:12:37.388 00:12:37.388 Persistent Memory Region Support 00:12:37.388 ================================ 00:12:37.388 Supported: No 00:12:37.388 00:12:37.388 Admin Command Set Attributes 00:12:37.389 ============================ 00:12:37.389 Security Send/Receive: Not Supported 00:12:37.389 Format NVM: Not Supported 00:12:37.389 Firmware Activate/Download: Not Supported 00:12:37.389 Namespace Management: Not Supported 00:12:37.389 Device Self-Test: Not Supported 00:12:37.389 
Directives: Not Supported 00:12:37.389 NVMe-MI: Not Supported 00:12:37.389 Virtualization Management: Not Supported 00:12:37.389 Doorbell Buffer Config: Not Supported 00:12:37.389 Get LBA Status Capability: Not Supported 00:12:37.389 Command & Feature Lockdown Capability: Not Supported 00:12:37.389 Abort Command Limit: 4 00:12:37.389 Async Event Request Limit: 4 00:12:37.389 Number of Firmware Slots: N/A 00:12:37.389 Firmware Slot 1 Read-Only: N/A 00:12:37.389 Firmware Activation Without Reset: N/A 00:12:37.389 Multiple Update Detection Support: N/A 00:12:37.389 Firmware Update Granularity: No Information Provided 00:12:37.389 Per-Namespace SMART Log: No 00:12:37.389 Asymmetric Namespace Access Log Page: Not Supported 00:12:37.389 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:37.389 Command Effects Log Page: Supported 00:12:37.389 Get Log Page Extended Data: Supported 00:12:37.389 Telemetry Log Pages: Not Supported 00:12:37.389 Persistent Event Log Pages: Not Supported 00:12:37.389 Supported Log Pages Log Page: May Support 00:12:37.389 Commands Supported & Effects Log Page: Not Supported 00:12:37.389 Feature Identifiers & Effects Log Page:May Support 00:12:37.389 NVMe-MI Commands & Effects Log Page: May Support 00:12:37.389 Data Area 4 for Telemetry Log: Not Supported 00:12:37.389 Error Log Page Entries Supported: 128 00:12:37.389 Keep Alive: Supported 00:12:37.389 Keep Alive Granularity: 10000 ms 00:12:37.389 00:12:37.389 NVM Command Set Attributes 00:12:37.389 ========================== 00:12:37.389 Submission Queue Entry Size 00:12:37.389 Max: 64 00:12:37.389 Min: 64 00:12:37.389 Completion Queue Entry Size 00:12:37.389 Max: 16 00:12:37.389 Min: 16 00:12:37.389 Number of Namespaces: 32 00:12:37.389 Compare Command: Supported 00:12:37.389 Write Uncorrectable Command: Not Supported 00:12:37.389 Dataset Management Command: Supported 00:12:37.389 Write Zeroes Command: Supported 00:12:37.389 Set Features Save Field: Not Supported 00:12:37.389 Reservations: Not 
Supported 00:12:37.389 Timestamp: Not Supported 00:12:37.389 Copy: Supported 00:12:37.389 Volatile Write Cache: Present 00:12:37.389 Atomic Write Unit (Normal): 1 00:12:37.389 Atomic Write Unit (PFail): 1 00:12:37.389 Atomic Compare & Write Unit: 1 00:12:37.389 Fused Compare & Write: Supported 00:12:37.389 Scatter-Gather List 00:12:37.389 SGL Command Set: Supported (Dword aligned) 00:12:37.389 SGL Keyed: Not Supported 00:12:37.389 SGL Bit Bucket Descriptor: Not Supported 00:12:37.389 SGL Metadata Pointer: Not Supported 00:12:37.389 Oversized SGL: Not Supported 00:12:37.389 SGL Metadata Address: Not Supported 00:12:37.389 SGL Offset: Not Supported 00:12:37.389 Transport SGL Data Block: Not Supported 00:12:37.389 Replay Protected Memory Block: Not Supported 00:12:37.389 00:12:37.389 Firmware Slot Information 00:12:37.389 ========================= 00:12:37.389 Active slot: 1 00:12:37.389 Slot 1 Firmware Revision: 24.05 00:12:37.389 00:12:37.389 00:12:37.389 Commands Supported and Effects 00:12:37.389 ============================== 00:12:37.389 Admin Commands 00:12:37.389 -------------- 00:12:37.389 Get Log Page (02h): Supported 00:12:37.389 Identify (06h): Supported 00:12:37.389 Abort (08h): Supported 00:12:37.389 Set Features (09h): Supported 00:12:37.389 Get Features (0Ah): Supported 00:12:37.389 Asynchronous Event Request (0Ch): Supported 00:12:37.389 Keep Alive (18h): Supported 00:12:37.389 I/O Commands 00:12:37.389 ------------ 00:12:37.389 Flush (00h): Supported LBA-Change 00:12:37.389 Write (01h): Supported LBA-Change 00:12:37.389 Read (02h): Supported 00:12:37.389 Compare (05h): Supported 00:12:37.389 Write Zeroes (08h): Supported LBA-Change 00:12:37.389 Dataset Management (09h): Supported LBA-Change 00:12:37.389 Copy (19h): Supported LBA-Change 00:12:37.389 Unknown (79h): Supported LBA-Change 00:12:37.389 Unknown (7Ah): Supported 00:12:37.389 00:12:37.389 Error Log 00:12:37.389 ========= 00:12:37.389 00:12:37.389 Arbitration 00:12:37.389 =========== 
00:12:37.389 Arbitration Burst: 1 00:12:37.389 00:12:37.389 Power Management 00:12:37.389 ================ 00:12:37.389 Number of Power States: 1 00:12:37.389 Current Power State: Power State #0 00:12:37.389 Power State #0: 00:12:37.389 Max Power: 0.00 W 00:12:37.389 Non-Operational State: Operational 00:12:37.389 Entry Latency: Not Reported 00:12:37.389 Exit Latency: Not Reported 00:12:37.389 Relative Read Throughput: 0 00:12:37.389 Relative Read Latency: 0 00:12:37.389 Relative Write Throughput: 0 00:12:37.389 Relative Write Latency: 0 00:12:37.389 Idle Power: Not Reported 00:12:37.389 Active Power: Not Reported 00:12:37.389 Non-Operational Permissive Mode: Not Supported 00:12:37.389 00:12:37.389 Health Information 00:12:37.389 ================== 00:12:37.389 Critical Warnings: 00:12:37.389 Available Spare Space: OK 00:12:37.389 Temperature: OK 00:12:37.389 Device Reliability: OK 00:12:37.389 Read Only: No 00:12:37.389 Volatile Memory Backup: OK 00:12:37.389 Current Temperature: 0 Kelvin (-2[2024-04-26 15:22:54.579977] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:37.389 [2024-04-26 15:22:54.587844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:37.389 [2024-04-26 15:22:54.587870] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:37.389 [2024-04-26 15:22:54.587879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.389 [2024-04-26 15:22:54.587885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.389 [2024-04-26 15:22:54.587891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:12:37.389 [2024-04-26 15:22:54.587897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.389 [2024-04-26 15:22:54.587949] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:37.389 [2024-04-26 15:22:54.587959] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:37.389 [2024-04-26 15:22:54.588955] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:37.389 [2024-04-26 15:22:54.589001] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:37.389 [2024-04-26 15:22:54.589010] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:37.389 [2024-04-26 15:22:54.589964] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:37.389 [2024-04-26 15:22:54.589976] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:37.389 [2024-04-26 15:22:54.590023] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:37.389 [2024-04-26 15:22:54.592844] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:37.389 73 Celsius) 00:12:37.389 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:37.389 Available Spare: 0% 00:12:37.389 Available Spare Threshold: 0% 00:12:37.389 Life Percentage Used: 0% 00:12:37.389 Data Units Read: 0 00:12:37.389 Data Units Written: 0 00:12:37.389 Host Read Commands: 0 00:12:37.389 Host Write 
Commands: 0 00:12:37.389 Controller Busy Time: 0 minutes 00:12:37.389 Power Cycles: 0 00:12:37.389 Power On Hours: 0 hours 00:12:37.389 Unsafe Shutdowns: 0 00:12:37.389 Unrecoverable Media Errors: 0 00:12:37.389 Lifetime Error Log Entries: 0 00:12:37.389 Warning Temperature Time: 0 minutes 00:12:37.389 Critical Temperature Time: 0 minutes 00:12:37.389 00:12:37.389 Number of Queues 00:12:37.389 ================ 00:12:37.389 Number of I/O Submission Queues: 127 00:12:37.389 Number of I/O Completion Queues: 127 00:12:37.389 00:12:37.389 Active Namespaces 00:12:37.389 ================= 00:12:37.389 Namespace ID:1 00:12:37.389 Error Recovery Timeout: Unlimited 00:12:37.389 Command Set Identifier: NVM (00h) 00:12:37.389 Deallocate: Supported 00:12:37.389 Deallocated/Unwritten Error: Not Supported 00:12:37.389 Deallocated Read Value: Unknown 00:12:37.389 Deallocate in Write Zeroes: Not Supported 00:12:37.389 Deallocated Guard Field: 0xFFFF 00:12:37.389 Flush: Supported 00:12:37.389 Reservation: Supported 00:12:37.389 Namespace Sharing Capabilities: Multiple Controllers 00:12:37.389 Size (in LBAs): 131072 (0GiB) 00:12:37.389 Capacity (in LBAs): 131072 (0GiB) 00:12:37.389 Utilization (in LBAs): 131072 (0GiB) 00:12:37.390 NGUID: 75A9F7B81CF44D9FBADE156BA2DEEFD0 00:12:37.390 UUID: 75a9f7b8-1cf4-4d9f-bade-156ba2deefd0 00:12:37.390 Thin Provisioning: Not Supported 00:12:37.390 Per-NS Atomic Units: Yes 00:12:37.390 Atomic Boundary Size (Normal): 0 00:12:37.390 Atomic Boundary Size (PFail): 0 00:12:37.390 Atomic Boundary Offset: 0 00:12:37.390 Maximum Single Source Range Length: 65535 00:12:37.390 Maximum Copy Length: 65535 00:12:37.390 Maximum Source Range Count: 1 00:12:37.390 NGUID/EUI64 Never Reused: No 00:12:37.390 Namespace Write Protected: No 00:12:37.390 Number of LBA Formats: 1 00:12:37.390 Current LBA Format: LBA Format #00 00:12:37.390 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:37.390 00:12:37.390 15:22:54 -- target/nvmf_vfio_user.sh@84 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:37.390 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.390 [2024-04-26 15:22:54.776848] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:42.674 [2024-04-26 15:22:59.885014] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:42.674 Initializing NVMe Controllers 00:12:42.674 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:42.674 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:42.674 Initialization complete. Launching workers. 00:12:42.674 ======================================================== 00:12:42.674 Latency(us) 00:12:42.674 Device Information : IOPS MiB/s Average min max 00:12:42.674 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39958.20 156.09 3205.73 847.34 6821.85 00:12:42.674 ======================================================== 00:12:42.674 Total : 39958.20 156.09 3205.73 847.34 6821.85 00:12:42.674 00:12:42.674 15:22:59 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:42.674 EAL: No free 2048 kB hugepages reported on node 1 00:12:42.674 [2024-04-26 15:23:00.065576] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:47.970 [2024-04-26 15:23:05.084342] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:47.970 Initializing NVMe Controllers 00:12:47.970 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:47.970 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:47.970 Initialization complete. Launching workers. 00:12:47.970 ======================================================== 00:12:47.970 Latency(us) 00:12:47.970 Device Information : IOPS MiB/s Average min max 00:12:47.970 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36069.52 140.90 3548.59 1118.79 8625.30 00:12:47.970 ======================================================== 00:12:47.970 Total : 36069.52 140.90 3548.59 1118.79 8625.30 00:12:47.970 00:12:47.970 15:23:05 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:47.970 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.970 [2024-04-26 15:23:05.270510] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:53.252 [2024-04-26 15:23:10.406923] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:53.252 Initializing NVMe Controllers 00:12:53.252 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:53.253 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:53.253 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:53.253 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:53.253 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:53.253 Initialization complete. Launching workers. 
00:12:53.253 Starting thread on core 2 00:12:53.253 Starting thread on core 3 00:12:53.253 Starting thread on core 1 00:12:53.253 15:23:10 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:53.253 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.253 [2024-04-26 15:23:10.659254] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:56.554 [2024-04-26 15:23:13.718220] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:56.554 Initializing NVMe Controllers 00:12:56.554 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:56.554 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:56.554 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:56.554 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:56.554 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:56.554 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:56.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:56.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:56.554 Initialization complete. Launching workers. 
00:12:56.554 Starting thread on core 1 with urgent priority queue 00:12:56.554 Starting thread on core 2 with urgent priority queue 00:12:56.554 Starting thread on core 3 with urgent priority queue 00:12:56.554 Starting thread on core 0 with urgent priority queue 00:12:56.554 SPDK bdev Controller (SPDK2 ) core 0: 13608.33 IO/s 7.35 secs/100000 ios 00:12:56.554 SPDK bdev Controller (SPDK2 ) core 1: 10246.00 IO/s 9.76 secs/100000 ios 00:12:56.554 SPDK bdev Controller (SPDK2 ) core 2: 13515.33 IO/s 7.40 secs/100000 ios 00:12:56.554 SPDK bdev Controller (SPDK2 ) core 3: 12306.33 IO/s 8.13 secs/100000 ios 00:12:56.554 ======================================================== 00:12:56.554 00:12:56.554 15:23:13 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:56.554 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.554 [2024-04-26 15:23:13.976307] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:56.554 [2024-04-26 15:23:13.984357] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:56.815 Initializing NVMe Controllers 00:12:56.815 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:56.815 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:56.815 Namespace ID: 1 size: 0GB 00:12:56.815 Initialization complete. 00:12:56.815 INFO: using host memory buffer for IO 00:12:56.815 Hello world! 
00:12:56.815 15:23:14 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:56.815 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.815 [2024-04-26 15:23:14.250848] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:58.324 Initializing NVMe Controllers 00:12:58.324 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:58.324 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:58.324 Initialization complete. Launching workers. 00:12:58.324 submit (in ns) avg, min, max = 9360.0, 3863.3, 7991396.7 00:12:58.324 complete (in ns) avg, min, max = 16624.1, 2365.0, 8023754.2 00:12:58.324 00:12:58.324 Submit histogram 00:12:58.324 ================ 00:12:58.324 Range in us Cumulative Count 00:12:58.324 3.840 - 3.867: 0.0308% ( 6) 00:12:58.324 3.867 - 3.893: 1.7089% ( 327) 00:12:58.324 3.893 - 3.920: 6.9794% ( 1027) 00:12:58.324 3.920 - 3.947: 16.6735% ( 1889) 00:12:58.324 3.947 - 3.973: 28.4409% ( 2293) 00:12:58.324 3.973 - 4.000: 40.5522% ( 2360) 00:12:58.324 4.000 - 4.027: 54.3775% ( 2694) 00:12:58.324 4.027 - 4.053: 72.3802% ( 3508) 00:12:58.324 4.053 - 4.080: 85.7436% ( 2604) 00:12:58.324 4.080 - 4.107: 93.6057% ( 1532) 00:12:58.324 4.107 - 4.133: 97.7676% ( 811) 00:12:58.324 4.133 - 4.160: 98.9685% ( 234) 00:12:58.324 4.160 - 4.187: 99.2918% ( 63) 00:12:58.324 4.187 - 4.213: 99.4150% ( 24) 00:12:58.324 4.213 - 4.240: 99.4406% ( 5) 00:12:58.324 4.240 - 4.267: 99.4458% ( 1) 00:12:58.324 4.320 - 4.347: 99.4509% ( 1) 00:12:58.324 4.453 - 4.480: 99.4560% ( 1) 00:12:58.324 4.480 - 4.507: 99.4612% ( 1) 00:12:58.324 4.560 - 4.587: 99.4765% ( 3) 00:12:58.324 4.587 - 4.613: 99.4817% ( 1) 00:12:58.324 4.880 - 4.907: 99.4919% ( 2) 00:12:58.324 5.013 - 5.040: 99.4971% ( 1) 00:12:58.324 5.467 - 5.493: 99.5073% ( 2) 
00:12:58.324 5.520 - 5.547: 99.5125% ( 1) 00:12:58.324 5.627 - 5.653: 99.5227% ( 2) 00:12:58.324 5.733 - 5.760: 99.5279% ( 1) 00:12:58.324 5.787 - 5.813: 99.5330% ( 1) 00:12:58.324 5.893 - 5.920: 99.5381% ( 1) 00:12:58.324 5.973 - 6.000: 99.5433% ( 1) 00:12:58.324 6.027 - 6.053: 99.5484% ( 1) 00:12:58.324 6.053 - 6.080: 99.5535% ( 1) 00:12:58.324 6.080 - 6.107: 99.5638% ( 2) 00:12:58.324 6.107 - 6.133: 99.5689% ( 1) 00:12:58.324 6.187 - 6.213: 99.5741% ( 1) 00:12:58.324 6.240 - 6.267: 99.5792% ( 1) 00:12:58.324 6.400 - 6.427: 99.5843% ( 1) 00:12:58.324 6.453 - 6.480: 99.5894% ( 1) 00:12:58.324 6.507 - 6.533: 99.5946% ( 1) 00:12:58.324 6.667 - 6.693: 99.5997% ( 1) 00:12:58.324 6.827 - 6.880: 99.6048% ( 1) 00:12:58.324 6.933 - 6.987: 99.6151% ( 2) 00:12:58.324 7.040 - 7.093: 99.6254% ( 2) 00:12:58.324 7.093 - 7.147: 99.6356% ( 2) 00:12:58.324 7.200 - 7.253: 99.6408% ( 1) 00:12:58.324 7.253 - 7.307: 99.6510% ( 2) 00:12:58.324 7.360 - 7.413: 99.6562% ( 1) 00:12:58.324 7.520 - 7.573: 99.6613% ( 1) 00:12:58.324 7.573 - 7.627: 99.6716% ( 2) 00:12:58.324 7.627 - 7.680: 99.6870% ( 3) 00:12:58.324 7.680 - 7.733: 99.6972% ( 2) 00:12:58.324 7.787 - 7.840: 99.7075% ( 2) 00:12:58.324 7.840 - 7.893: 99.7126% ( 1) 00:12:58.324 7.893 - 7.947: 99.7280% ( 3) 00:12:58.324 8.000 - 8.053: 99.7383% ( 2) 00:12:58.324 8.053 - 8.107: 99.7485% ( 2) 00:12:58.324 8.107 - 8.160: 99.7588% ( 2) 00:12:58.324 8.213 - 8.267: 99.7742% ( 3) 00:12:58.324 8.320 - 8.373: 99.7793% ( 1) 00:12:58.324 8.427 - 8.480: 99.7896% ( 2) 00:12:58.324 8.480 - 8.533: 99.7947% ( 1) 00:12:58.324 8.640 - 8.693: 99.8050% ( 2) 00:12:58.324 8.747 - 8.800: 99.8101% ( 1) 00:12:58.324 8.800 - 8.853: 99.8153% ( 1) 00:12:58.325 9.013 - 9.067: 99.8204% ( 1) 00:12:58.325 9.067 - 9.120: 99.8255% ( 1) 00:12:58.325 9.120 - 9.173: 99.8306% ( 1) 00:12:58.325 9.227 - 9.280: 99.8409% ( 2) 00:12:58.325 9.333 - 9.387: 99.8460% ( 1) 00:12:58.325 9.440 - 9.493: 99.8512% ( 1) 00:12:58.325 9.813 - 9.867: 99.8563% ( 1) 00:12:58.325 12.533 - 
12.587: 99.8614% ( 1) 00:12:58.325 13.653 - 13.760: 99.8666% ( 1) 00:12:58.325 17.280 - 17.387: 99.8717% ( 1) 00:12:58.325 3986.773 - 4014.080: 99.9897% ( 23) 00:12:58.325 4014.080 - 4041.387: 99.9949% ( 1) 00:12:58.325 7973.547 - 8028.160: 100.0000% ( 1) 00:12:58.325 00:12:58.325 Complete histogram 00:12:58.325 ================== 00:12:58.325 Range in us Cumulative Count 00:12:58.325 2.360 - [2024-04-26 15:23:15.357506] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:58.325 2.373: 0.0154% ( 3) 00:12:58.325 2.373 - 2.387: 0.0205% ( 1) 00:12:58.325 2.387 - 2.400: 0.9083% ( 173) 00:12:58.325 2.400 - 2.413: 1.2419% ( 65) 00:12:58.325 2.413 - 2.427: 1.3702% ( 25) 00:12:58.325 2.427 - 2.440: 15.9140% ( 2834) 00:12:58.325 2.440 - 2.453: 56.9434% ( 7995) 00:12:58.325 2.453 - 2.467: 66.7351% ( 1908) 00:12:58.325 2.467 - 2.480: 76.4959% ( 1902) 00:12:58.325 2.480 - 2.493: 80.6528% ( 810) 00:12:58.325 2.493 - 2.507: 82.8133% ( 421) 00:12:58.325 2.507 - 2.520: 86.9291% ( 802) 00:12:58.325 2.520 - 2.533: 92.9898% ( 1181) 00:12:58.325 2.533 - 2.547: 96.5001% ( 684) 00:12:58.325 2.547 - 2.560: 98.0396% ( 300) 00:12:58.325 2.560 - 2.573: 99.0147% ( 190) 00:12:58.325 2.573 - 2.587: 99.3483% ( 65) 00:12:58.325 2.587 - 2.600: 99.4304% ( 16) 00:12:58.325 2.600 - 2.613: 99.4406% ( 2) 00:12:58.325 4.480 - 4.507: 99.4458% ( 1) 00:12:58.325 4.587 - 4.613: 99.4509% ( 1) 00:12:58.325 4.613 - 4.640: 99.4560% ( 1) 00:12:58.325 4.640 - 4.667: 99.4663% ( 2) 00:12:58.325 4.720 - 4.747: 99.4714% ( 1) 00:12:58.325 4.800 - 4.827: 99.4765% ( 1) 00:12:58.325 5.813 - 5.840: 99.4817% ( 1) 00:12:58.325 5.867 - 5.893: 99.4868% ( 1) 00:12:58.325 5.920 - 5.947: 99.5022% ( 3) 00:12:58.325 5.947 - 5.973: 99.5073% ( 1) 00:12:58.325 6.053 - 6.080: 99.5125% ( 1) 00:12:58.325 6.080 - 6.107: 99.5176% ( 1) 00:12:58.325 6.133 - 6.160: 99.5279% ( 2) 00:12:58.325 6.160 - 6.187: 99.5330% ( 1) 00:12:58.325 6.347 - 6.373: 99.5433% ( 2) 00:12:58.325 6.373 - 
6.400: 99.5535% ( 2) 00:12:58.325 6.453 - 6.480: 99.5587% ( 1) 00:12:58.325 6.480 - 6.507: 99.5689% ( 2) 00:12:58.325 6.560 - 6.587: 99.5741% ( 1) 00:12:58.325 6.667 - 6.693: 99.5792% ( 1) 00:12:58.325 6.693 - 6.720: 99.5843% ( 1) 00:12:58.325 6.773 - 6.800: 99.5894% ( 1) 00:12:58.325 6.880 - 6.933: 99.5946% ( 1) 00:12:58.325 6.933 - 6.987: 99.5997% ( 1) 00:12:58.325 7.200 - 7.253: 99.6048% ( 1) 00:12:58.325 7.253 - 7.307: 99.6100% ( 1) 00:12:58.325 7.360 - 7.413: 99.6151% ( 1) 00:12:58.325 7.520 - 7.573: 99.6202% ( 1) 00:12:58.325 7.733 - 7.787: 99.6254% ( 1) 00:12:58.325 8.160 - 8.213: 99.6305% ( 1) 00:12:58.325 8.373 - 8.427: 99.6356% ( 1) 00:12:58.325 8.747 - 8.800: 99.6408% ( 1) 00:12:58.325 11.253 - 11.307: 99.6459% ( 1) 00:12:58.325 12.373 - 12.427: 99.6510% ( 1) 00:12:58.325 3986.773 - 4014.080: 99.9897% ( 66) 00:12:58.325 4014.080 - 4041.387: 99.9949% ( 1) 00:12:58.325 7973.547 - 8028.160: 100.0000% ( 1) 00:12:58.325 00:12:58.325 15:23:15 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:58.325 15:23:15 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:58.325 15:23:15 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:58.325 15:23:15 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:58.325 15:23:15 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:58.325 [ 00:12:58.325 { 00:12:58.325 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:58.325 "subtype": "Discovery", 00:12:58.325 "listen_addresses": [], 00:12:58.325 "allow_any_host": true, 00:12:58.325 "hosts": [] 00:12:58.325 }, 00:12:58.325 { 00:12:58.325 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:58.325 "subtype": "NVMe", 00:12:58.325 "listen_addresses": [ 00:12:58.325 { 00:12:58.325 "transport": "VFIOUSER", 00:12:58.325 "trtype": "VFIOUSER", 00:12:58.325 "adrfam": 
"IPv4", 00:12:58.325 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:58.325 "trsvcid": "0" 00:12:58.325 } 00:12:58.325 ], 00:12:58.325 "allow_any_host": true, 00:12:58.325 "hosts": [], 00:12:58.325 "serial_number": "SPDK1", 00:12:58.325 "model_number": "SPDK bdev Controller", 00:12:58.325 "max_namespaces": 32, 00:12:58.325 "min_cntlid": 1, 00:12:58.325 "max_cntlid": 65519, 00:12:58.325 "namespaces": [ 00:12:58.325 { 00:12:58.325 "nsid": 1, 00:12:58.325 "bdev_name": "Malloc1", 00:12:58.325 "name": "Malloc1", 00:12:58.325 "nguid": "21797AD07EB842948A7E69864A479167", 00:12:58.325 "uuid": "21797ad0-7eb8-4294-8a7e-69864a479167" 00:12:58.325 }, 00:12:58.325 { 00:12:58.325 "nsid": 2, 00:12:58.325 "bdev_name": "Malloc3", 00:12:58.325 "name": "Malloc3", 00:12:58.325 "nguid": "528D824374F9457A9EC0C6403E1A0B43", 00:12:58.325 "uuid": "528d8243-74f9-457a-9ec0-c6403e1a0b43" 00:12:58.325 } 00:12:58.325 ] 00:12:58.325 }, 00:12:58.325 { 00:12:58.325 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:58.325 "subtype": "NVMe", 00:12:58.325 "listen_addresses": [ 00:12:58.325 { 00:12:58.325 "transport": "VFIOUSER", 00:12:58.325 "trtype": "VFIOUSER", 00:12:58.325 "adrfam": "IPv4", 00:12:58.325 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:58.325 "trsvcid": "0" 00:12:58.325 } 00:12:58.325 ], 00:12:58.325 "allow_any_host": true, 00:12:58.325 "hosts": [], 00:12:58.325 "serial_number": "SPDK2", 00:12:58.325 "model_number": "SPDK bdev Controller", 00:12:58.325 "max_namespaces": 32, 00:12:58.325 "min_cntlid": 1, 00:12:58.325 "max_cntlid": 65519, 00:12:58.325 "namespaces": [ 00:12:58.325 { 00:12:58.325 "nsid": 1, 00:12:58.325 "bdev_name": "Malloc2", 00:12:58.325 "name": "Malloc2", 00:12:58.325 "nguid": "75A9F7B81CF44D9FBADE156BA2DEEFD0", 00:12:58.325 "uuid": "75a9f7b8-1cf4-4d9f-bade-156ba2deefd0" 00:12:58.325 } 00:12:58.325 ] 00:12:58.325 } 00:12:58.325 ] 00:12:58.325 15:23:15 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:58.325 15:23:15 -- 
target/nvmf_vfio_user.sh@34 -- # aerpid=1552630 00:12:58.325 15:23:15 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:58.325 15:23:15 -- common/autotest_common.sh@1251 -- # local i=0 00:12:58.325 15:23:15 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:58.325 15:23:15 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:58.325 15:23:15 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:58.325 15:23:15 -- common/autotest_common.sh@1262 -- # return 0 00:12:58.325 15:23:15 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:58.325 15:23:15 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:58.325 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.325 Malloc4 00:12:58.325 [2024-04-26 15:23:15.748275] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:58.325 15:23:15 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:58.586 [2024-04-26 15:23:15.918416] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:58.586 15:23:15 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:58.586 Asynchronous Event Request test 00:12:58.586 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:58.586 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:58.586 Registering asynchronous event callbacks... 00:12:58.587 Starting namespace attribute notice tests for all controllers... 
00:12:58.587 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:58.587 aer_cb - Changed Namespace 00:12:58.587 Cleaning up... 00:12:58.847 [ 00:12:58.847 { 00:12:58.847 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:58.847 "subtype": "Discovery", 00:12:58.847 "listen_addresses": [], 00:12:58.847 "allow_any_host": true, 00:12:58.847 "hosts": [] 00:12:58.847 }, 00:12:58.847 { 00:12:58.847 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:58.847 "subtype": "NVMe", 00:12:58.847 "listen_addresses": [ 00:12:58.847 { 00:12:58.847 "transport": "VFIOUSER", 00:12:58.847 "trtype": "VFIOUSER", 00:12:58.847 "adrfam": "IPv4", 00:12:58.847 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:58.847 "trsvcid": "0" 00:12:58.847 } 00:12:58.847 ], 00:12:58.847 "allow_any_host": true, 00:12:58.847 "hosts": [], 00:12:58.847 "serial_number": "SPDK1", 00:12:58.847 "model_number": "SPDK bdev Controller", 00:12:58.847 "max_namespaces": 32, 00:12:58.847 "min_cntlid": 1, 00:12:58.847 "max_cntlid": 65519, 00:12:58.847 "namespaces": [ 00:12:58.847 { 00:12:58.847 "nsid": 1, 00:12:58.847 "bdev_name": "Malloc1", 00:12:58.847 "name": "Malloc1", 00:12:58.847 "nguid": "21797AD07EB842948A7E69864A479167", 00:12:58.847 "uuid": "21797ad0-7eb8-4294-8a7e-69864a479167" 00:12:58.847 }, 00:12:58.847 { 00:12:58.847 "nsid": 2, 00:12:58.847 "bdev_name": "Malloc3", 00:12:58.847 "name": "Malloc3", 00:12:58.847 "nguid": "528D824374F9457A9EC0C6403E1A0B43", 00:12:58.847 "uuid": "528d8243-74f9-457a-9ec0-c6403e1a0b43" 00:12:58.847 } 00:12:58.847 ] 00:12:58.847 }, 00:12:58.847 { 00:12:58.847 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:58.847 "subtype": "NVMe", 00:12:58.847 "listen_addresses": [ 00:12:58.847 { 00:12:58.847 "transport": "VFIOUSER", 00:12:58.847 "trtype": "VFIOUSER", 00:12:58.847 "adrfam": "IPv4", 00:12:58.847 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:58.847 "trsvcid": "0" 00:12:58.847 } 00:12:58.847 ], 00:12:58.847 
"allow_any_host": true, 00:12:58.847 "hosts": [], 00:12:58.847 "serial_number": "SPDK2", 00:12:58.847 "model_number": "SPDK bdev Controller", 00:12:58.847 "max_namespaces": 32, 00:12:58.847 "min_cntlid": 1, 00:12:58.847 "max_cntlid": 65519, 00:12:58.847 "namespaces": [ 00:12:58.847 { 00:12:58.847 "nsid": 1, 00:12:58.847 "bdev_name": "Malloc2", 00:12:58.847 "name": "Malloc2", 00:12:58.847 "nguid": "75A9F7B81CF44D9FBADE156BA2DEEFD0", 00:12:58.847 "uuid": "75a9f7b8-1cf4-4d9f-bade-156ba2deefd0" 00:12:58.847 }, 00:12:58.847 { 00:12:58.847 "nsid": 2, 00:12:58.847 "bdev_name": "Malloc4", 00:12:58.847 "name": "Malloc4", 00:12:58.847 "nguid": "872A3DE8F8C647A9B8C188812A83DA0D", 00:12:58.847 "uuid": "872a3de8-f8c6-47a9-b8c1-88812a83da0d" 00:12:58.847 } 00:12:58.847 ] 00:12:58.847 } 00:12:58.847 ] 00:12:58.847 15:23:16 -- target/nvmf_vfio_user.sh@44 -- # wait 1552630 00:12:58.848 15:23:16 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:58.848 15:23:16 -- target/nvmf_vfio_user.sh@95 -- # killprocess 1543694 00:12:58.848 15:23:16 -- common/autotest_common.sh@936 -- # '[' -z 1543694 ']' 00:12:58.848 15:23:16 -- common/autotest_common.sh@940 -- # kill -0 1543694 00:12:58.848 15:23:16 -- common/autotest_common.sh@941 -- # uname 00:12:58.848 15:23:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:58.848 15:23:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1543694 00:12:58.848 15:23:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:58.848 15:23:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:58.848 15:23:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1543694' 00:12:58.848 killing process with pid 1543694 00:12:58.848 15:23:16 -- common/autotest_common.sh@955 -- # kill 1543694 00:12:58.848 [2024-04-26 15:23:16.169951] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for 
removal in v24.05 hit 1 times 00:12:58.848 15:23:16 -- common/autotest_common.sh@960 -- # wait 1543694 00:12:59.109 15:23:16 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:59.109 15:23:16 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:59.109 15:23:16 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:59.109 15:23:16 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:59.109 15:23:16 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:59.109 15:23:16 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1552945 00:12:59.109 15:23:16 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1552945' 00:12:59.109 Process pid: 1552945 00:12:59.109 15:23:16 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:59.109 15:23:16 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:59.109 15:23:16 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1552945 00:12:59.109 15:23:16 -- common/autotest_common.sh@817 -- # '[' -z 1552945 ']' 00:12:59.109 15:23:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.109 15:23:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:59.109 15:23:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.109 15:23:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:59.109 15:23:16 -- common/autotest_common.sh@10 -- # set +x 00:12:59.109 [2024-04-26 15:23:16.398941] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:12:59.109 [2024-04-26 15:23:16.399866] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:12:59.109 [2024-04-26 15:23:16.399906] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.109 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.109 [2024-04-26 15:23:16.463700] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.109 [2024-04-26 15:23:16.526291] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.109 [2024-04-26 15:23:16.526332] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.109 [2024-04-26 15:23:16.526341] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.109 [2024-04-26 15:23:16.526349] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.109 [2024-04-26 15:23:16.526356] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:59.109 [2024-04-26 15:23:16.526524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.109 [2024-04-26 15:23:16.526630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.109 [2024-04-26 15:23:16.526787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.109 [2024-04-26 15:23:16.526788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.369 [2024-04-26 15:23:16.588731] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:12:59.369 [2024-04-26 15:23:16.588744] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 
00:12:59.369 [2024-04-26 15:23:16.589069] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:12:59.369 [2024-04-26 15:23:16.589235] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:59.369 [2024-04-26 15:23:16.589323] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:12:59.940 15:23:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:59.940 15:23:17 -- common/autotest_common.sh@850 -- # return 0 00:12:59.940 15:23:17 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:00.880 15:23:18 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:01.140 15:23:18 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:01.140 15:23:18 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:01.140 15:23:18 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:01.140 15:23:18 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:01.140 15:23:18 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:01.140 Malloc1 00:13:01.140 15:23:18 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:01.400 15:23:18 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:01.400 15:23:18 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 
00:13:01.660 15:23:19 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:01.660 15:23:19 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:01.660 15:23:19 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:01.921 Malloc2 00:13:01.921 15:23:19 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:01.921 15:23:19 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:02.191 15:23:19 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:02.452 15:23:19 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:02.452 15:23:19 -- target/nvmf_vfio_user.sh@95 -- # killprocess 1552945 00:13:02.452 15:23:19 -- common/autotest_common.sh@936 -- # '[' -z 1552945 ']' 00:13:02.452 15:23:19 -- common/autotest_common.sh@940 -- # kill -0 1552945 00:13:02.452 15:23:19 -- common/autotest_common.sh@941 -- # uname 00:13:02.452 15:23:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:02.452 15:23:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1552945 00:13:02.453 15:23:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:02.453 15:23:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:02.453 15:23:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1552945' 00:13:02.453 killing process with pid 1552945 00:13:02.453 15:23:19 -- common/autotest_common.sh@955 -- # kill 1552945 00:13:02.453 15:23:19 -- common/autotest_common.sh@960 -- # wait 1552945 
00:13:02.453 15:23:19 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:02.453 15:23:19 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:02.453 00:13:02.453 real 0m50.418s 00:13:02.453 user 3m19.870s 00:13:02.453 sys 0m3.010s 00:13:02.453 15:23:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:02.453 15:23:19 -- common/autotest_common.sh@10 -- # set +x 00:13:02.453 ************************************ 00:13:02.453 END TEST nvmf_vfio_user 00:13:02.453 ************************************ 00:13:02.715 15:23:19 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:02.715 15:23:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:02.715 15:23:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:02.715 15:23:19 -- common/autotest_common.sh@10 -- # set +x 00:13:02.715 ************************************ 00:13:02.715 START TEST nvmf_vfio_user_nvme_compliance 00:13:02.715 ************************************ 00:13:02.715 15:23:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:02.715 * Looking for test storage... 
00:13:02.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:02.976 15:23:20 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.976 15:23:20 -- nvmf/common.sh@7 -- # uname -s 00:13:02.976 15:23:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.976 15:23:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.976 15:23:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.976 15:23:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.976 15:23:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.976 15:23:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.976 15:23:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.976 15:23:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.976 15:23:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.976 15:23:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.976 15:23:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:02.976 15:23:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:02.976 15:23:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.976 15:23:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.976 15:23:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:02.976 15:23:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.976 15:23:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:02.976 15:23:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.976 15:23:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.976 15:23:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.976 15:23:20 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.976 15:23:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.976 15:23:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.976 15:23:20 -- paths/export.sh@5 -- # export PATH 00:13:02.976 15:23:20 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.976 15:23:20 -- nvmf/common.sh@47 -- # : 0 00:13:02.976 15:23:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:02.976 15:23:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:02.976 15:23:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.976 15:23:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.976 15:23:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.976 15:23:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:02.976 15:23:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:02.976 15:23:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:02.976 15:23:20 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:02.976 15:23:20 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:02.976 15:23:20 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:02.976 15:23:20 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:02.976 15:23:20 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:02.976 15:23:20 -- compliance/compliance.sh@20 -- # nvmfpid=1553704 00:13:02.976 15:23:20 -- compliance/compliance.sh@21 -- # echo 'Process pid: 1553704' 00:13:02.976 Process pid: 1553704 00:13:02.976 15:23:20 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:02.976 15:23:20 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt 
-i 0 -e 0xFFFF -m 0x7 00:13:02.976 15:23:20 -- compliance/compliance.sh@24 -- # waitforlisten 1553704 00:13:02.976 15:23:20 -- common/autotest_common.sh@817 -- # '[' -z 1553704 ']' 00:13:02.976 15:23:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.976 15:23:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:02.976 15:23:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.976 15:23:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:02.976 15:23:20 -- common/autotest_common.sh@10 -- # set +x 00:13:02.976 [2024-04-26 15:23:20.259724] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:13:02.976 [2024-04-26 15:23:20.259794] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.976 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.976 [2024-04-26 15:23:20.325702] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:02.976 [2024-04-26 15:23:20.397771] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.976 [2024-04-26 15:23:20.397811] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.976 [2024-04-26 15:23:20.397819] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.976 [2024-04-26 15:23:20.397825] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.976 [2024-04-26 15:23:20.397830] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:02.976 [2024-04-26 15:23:20.397907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.976 [2024-04-26 15:23:20.398042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.976 [2024-04-26 15:23:20.398045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.918 15:23:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:03.918 15:23:21 -- common/autotest_common.sh@850 -- # return 0 00:13:03.918 15:23:21 -- compliance/compliance.sh@26 -- # sleep 1 00:13:04.860 15:23:22 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:04.860 15:23:22 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:04.860 15:23:22 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:04.860 15:23:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.860 15:23:22 -- common/autotest_common.sh@10 -- # set +x 00:13:04.860 15:23:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.860 15:23:22 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:04.860 15:23:22 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:04.860 15:23:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.860 15:23:22 -- common/autotest_common.sh@10 -- # set +x 00:13:04.860 malloc0 00:13:04.860 15:23:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.860 15:23:22 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:04.860 15:23:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.860 15:23:22 -- common/autotest_common.sh@10 -- # set +x 00:13:04.860 15:23:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.860 15:23:22 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:04.860 15:23:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.860 
15:23:22 -- common/autotest_common.sh@10 -- # set +x 00:13:04.860 15:23:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.860 15:23:22 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:04.860 15:23:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.860 15:23:22 -- common/autotest_common.sh@10 -- # set +x 00:13:04.860 15:23:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.860 15:23:22 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:04.860 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.860 00:13:04.860 00:13:04.860 CUnit - A unit testing framework for C - Version 2.1-3 00:13:04.860 http://cunit.sourceforge.net/ 00:13:04.860 00:13:04.860 00:13:04.860 Suite: nvme_compliance 00:13:04.860 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-26 15:23:22.284665] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:04.860 [2024-04-26 15:23:22.285984] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:04.860 [2024-04-26 15:23:22.285994] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:04.860 [2024-04-26 15:23:22.285999] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:04.860 [2024-04-26 15:23:22.287680] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.121 passed 00:13:05.121 Test: admin_identify_ctrlr_verify_fused ...[2024-04-26 15:23:22.384249] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.121 [2024-04-26 15:23:22.387261] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.121 passed 
00:13:05.121 Test: admin_identify_ns ...[2024-04-26 15:23:22.483088] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.121 [2024-04-26 15:23:22.542850] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:05.121 [2024-04-26 15:23:22.550849] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:05.382 [2024-04-26 15:23:22.571958] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.382 passed 00:13:05.382 Test: admin_get_features_mandatory_features ...[2024-04-26 15:23:22.665956] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.382 [2024-04-26 15:23:22.668985] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.382 passed 00:13:05.382 Test: admin_get_features_optional_features ...[2024-04-26 15:23:22.762525] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.382 [2024-04-26 15:23:22.768553] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.382 passed 00:13:05.642 Test: admin_set_features_number_of_queues ...[2024-04-26 15:23:22.859089] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.642 [2024-04-26 15:23:22.963941] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.642 passed 00:13:05.642 Test: admin_get_log_page_mandatory_logs ...[2024-04-26 15:23:23.057974] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.642 [2024-04-26 15:23:23.060990] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.903 passed 00:13:05.903 Test: admin_get_log_page_with_lpo ...[2024-04-26 15:23:23.154067] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.903 [2024-04-26 
15:23:23.221846] ctrlr.c:2604:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:05.903 [2024-04-26 15:23:23.234890] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.903 passed 00:13:05.903 Test: fabric_property_get ...[2024-04-26 15:23:23.328938] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.903 [2024-04-26 15:23:23.330169] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:05.903 [2024-04-26 15:23:23.331958] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.163 passed 00:13:06.163 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-26 15:23:23.425490] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.163 [2024-04-26 15:23:23.426736] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:06.163 [2024-04-26 15:23:23.428520] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.163 passed 00:13:06.163 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-26 15:23:23.521669] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.163 [2024-04-26 15:23:23.604844] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:06.423 [2024-04-26 15:23:23.620846] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:06.423 [2024-04-26 15:23:23.625940] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.423 passed 00:13:06.423 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-26 15:23:23.717916] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.423 [2024-04-26 15:23:23.719148] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:13:06.423 [2024-04-26 15:23:23.720940] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.423 passed 00:13:06.423 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-26 15:23:23.813028] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.684 [2024-04-26 15:23:23.887851] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:06.684 [2024-04-26 15:23:23.911849] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:06.684 [2024-04-26 15:23:23.916926] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.684 passed 00:13:06.684 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-26 15:23:24.010932] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.684 [2024-04-26 15:23:24.012159] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:06.684 [2024-04-26 15:23:24.012176] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:06.684 [2024-04-26 15:23:24.013944] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.684 passed 00:13:06.684 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-26 15:23:24.107075] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.945 [2024-04-26 15:23:24.198847] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:06.945 [2024-04-26 15:23:24.206843] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:06.945 [2024-04-26 15:23:24.214851] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:06.945 [2024-04-26 15:23:24.222846] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:06.945 
[2024-04-26 15:23:24.251934] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.945 passed 00:13:06.945 Test: admin_create_io_sq_verify_pc ...[2024-04-26 15:23:24.345927] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.945 [2024-04-26 15:23:24.364852] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:06.945 [2024-04-26 15:23:24.382084] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:07.205 passed 00:13:07.205 Test: admin_create_io_qp_max_qps ...[2024-04-26 15:23:24.473625] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:08.148 [2024-04-26 15:23:25.580848] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:08.720 [2024-04-26 15:23:25.966383] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:08.720 passed 00:13:08.720 Test: admin_create_io_sq_shared_cq ...[2024-04-26 15:23:26.056538] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:08.982 [2024-04-26 15:23:26.191854] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:08.982 [2024-04-26 15:23:26.228903] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:08.982 passed 00:13:08.982 00:13:08.982 Run Summary: Type Total Ran Passed Failed Inactive 00:13:08.982 suites 1 1 n/a 0 0 00:13:08.982 tests 18 18 18 0 0 00:13:08.982 asserts 360 360 360 0 n/a 00:13:08.982 00:13:08.982 Elapsed time = 1.653 seconds 00:13:08.982 15:23:26 -- compliance/compliance.sh@42 -- # killprocess 1553704 00:13:08.982 15:23:26 -- common/autotest_common.sh@936 -- # '[' -z 1553704 ']' 00:13:08.982 15:23:26 -- common/autotest_common.sh@940 -- # kill -0 1553704 00:13:08.982 15:23:26 -- common/autotest_common.sh@941 -- # uname 
00:13:08.982 15:23:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:08.982 15:23:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1553704 00:13:08.982 15:23:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:08.982 15:23:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:08.982 15:23:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1553704' 00:13:08.982 killing process with pid 1553704 00:13:08.982 15:23:26 -- common/autotest_common.sh@955 -- # kill 1553704 00:13:08.982 15:23:26 -- common/autotest_common.sh@960 -- # wait 1553704 00:13:09.243 15:23:26 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:09.243 15:23:26 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:09.243 00:13:09.243 real 0m6.411s 00:13:09.243 user 0m18.340s 00:13:09.243 sys 0m0.449s 00:13:09.243 15:23:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:09.243 15:23:26 -- common/autotest_common.sh@10 -- # set +x 00:13:09.243 ************************************ 00:13:09.243 END TEST nvmf_vfio_user_nvme_compliance 00:13:09.243 ************************************ 00:13:09.243 15:23:26 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:09.243 15:23:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:09.243 15:23:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:09.243 15:23:26 -- common/autotest_common.sh@10 -- # set +x 00:13:09.243 ************************************ 00:13:09.243 START TEST nvmf_vfio_user_fuzz 00:13:09.243 ************************************ 00:13:09.243 15:23:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:09.503 * Looking for test storage... 
00:13:09.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:09.503 15:23:26 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.503 15:23:26 -- nvmf/common.sh@7 -- # uname -s 00:13:09.503 15:23:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.503 15:23:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.503 15:23:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.503 15:23:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.503 15:23:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.503 15:23:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.503 15:23:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.503 15:23:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.503 15:23:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.503 15:23:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.503 15:23:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:09.503 15:23:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:09.503 15:23:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.503 15:23:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.503 15:23:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:09.503 15:23:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.503 15:23:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:09.504 15:23:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.504 15:23:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.504 15:23:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.504 15:23:26 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.504 15:23:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.504 15:23:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.504 15:23:26 -- paths/export.sh@5 -- # export PATH 00:13:09.504 15:23:26 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.504 15:23:26 -- nvmf/common.sh@47 -- # : 0 00:13:09.504 15:23:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:09.504 15:23:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:09.504 15:23:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.504 15:23:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.504 15:23:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.504 15:23:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:09.504 15:23:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:09.504 15:23:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:09.504 15:23:26 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:09.504 15:23:26 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:09.504 15:23:26 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:09.504 15:23:26 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:09.504 15:23:26 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:09.504 15:23:26 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:09.504 15:23:26 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:09.504 15:23:26 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1555110 00:13:09.504 15:23:26 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1555110' 00:13:09.504 Process pid: 1555110 00:13:09.504 15:23:26 -- target/vfio_user_fuzz.sh@27 -- # trap 
'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:09.504 15:23:26 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:09.504 15:23:26 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1555110 00:13:09.504 15:23:26 -- common/autotest_common.sh@817 -- # '[' -z 1555110 ']' 00:13:09.504 15:23:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.504 15:23:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:09.504 15:23:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.504 15:23:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:09.504 15:23:26 -- common/autotest_common.sh@10 -- # set +x 00:13:10.444 15:23:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:10.444 15:23:27 -- common/autotest_common.sh@850 -- # return 0 00:13:10.444 15:23:27 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:11.385 15:23:28 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:11.386 15:23:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.386 15:23:28 -- common/autotest_common.sh@10 -- # set +x 00:13:11.386 15:23:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:11.386 15:23:28 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:11.386 15:23:28 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:11.386 15:23:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.386 15:23:28 -- common/autotest_common.sh@10 -- # set +x 00:13:11.386 malloc0 00:13:11.386 15:23:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:11.386 15:23:28 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s 
spdk 00:13:11.386 15:23:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.386 15:23:28 -- common/autotest_common.sh@10 -- # set +x 00:13:11.386 15:23:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:11.386 15:23:28 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:11.386 15:23:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.386 15:23:28 -- common/autotest_common.sh@10 -- # set +x 00:13:11.386 15:23:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:11.386 15:23:28 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:11.386 15:23:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.386 15:23:28 -- common/autotest_common.sh@10 -- # set +x 00:13:11.386 15:23:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:11.386 15:23:28 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:11.386 15:23:28 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:43.500 Fuzzing completed. 
Shutting down the fuzz application 00:13:43.500 00:13:43.500 Dumping successful admin opcodes: 00:13:43.500 8, 9, 10, 24, 00:13:43.500 Dumping successful io opcodes: 00:13:43.500 0, 00:13:43.500 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1182924, total successful commands: 4649, random_seed: 1751342400 00:13:43.500 NS: 0x200003a1ef00 admin qp, Total commands completed: 148692, total successful commands: 1198, random_seed: 3518983104 00:13:43.500 15:24:00 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:43.500 15:24:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:43.500 15:24:00 -- common/autotest_common.sh@10 -- # set +x 00:13:43.500 15:24:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:43.500 15:24:00 -- target/vfio_user_fuzz.sh@46 -- # killprocess 1555110 00:13:43.500 15:24:00 -- common/autotest_common.sh@936 -- # '[' -z 1555110 ']' 00:13:43.500 15:24:00 -- common/autotest_common.sh@940 -- # kill -0 1555110 00:13:43.500 15:24:00 -- common/autotest_common.sh@941 -- # uname 00:13:43.500 15:24:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:43.500 15:24:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1555110 00:13:43.500 15:24:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:43.500 15:24:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:43.500 15:24:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1555110' 00:13:43.500 killing process with pid 1555110 00:13:43.500 15:24:00 -- common/autotest_common.sh@955 -- # kill 1555110 00:13:43.500 15:24:00 -- common/autotest_common.sh@960 -- # wait 1555110 00:13:43.500 15:24:00 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:43.500 15:24:00 -- 
target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:43.500 00:13:43.500 real 0m33.680s 00:13:43.500 user 0m40.031s 00:13:43.500 sys 0m22.570s 00:13:43.500 15:24:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:43.500 15:24:00 -- common/autotest_common.sh@10 -- # set +x 00:13:43.500 ************************************ 00:13:43.500 END TEST nvmf_vfio_user_fuzz 00:13:43.500 ************************************ 00:13:43.500 15:24:00 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:43.500 15:24:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:43.500 15:24:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:43.500 15:24:00 -- common/autotest_common.sh@10 -- # set +x 00:13:43.500 ************************************ 00:13:43.500 START TEST nvmf_host_management 00:13:43.500 ************************************ 00:13:43.500 15:24:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:43.500 * Looking for test storage... 
00:13:43.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.500 15:24:00 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.500 15:24:00 -- nvmf/common.sh@7 -- # uname -s 00:13:43.500 15:24:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.500 15:24:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.500 15:24:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.500 15:24:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.500 15:24:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.500 15:24:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.500 15:24:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.500 15:24:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.500 15:24:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.500 15:24:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.500 15:24:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:43.500 15:24:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:43.500 15:24:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.500 15:24:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.500 15:24:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.500 15:24:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.500 15:24:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.500 15:24:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.500 15:24:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.500 15:24:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.500 15:24:00 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.500 15:24:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.500 15:24:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.500 15:24:00 -- paths/export.sh@5 -- # export PATH 00:13:43.500 15:24:00 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.500 15:24:00 -- nvmf/common.sh@47 -- # : 0 00:13:43.500 15:24:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:43.500 15:24:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:43.500 15:24:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.500 15:24:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.500 15:24:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.500 15:24:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:43.500 15:24:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:43.500 15:24:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:43.500 15:24:00 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:43.500 15:24:00 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:43.500 15:24:00 -- target/host_management.sh@105 -- # nvmftestinit 00:13:43.500 15:24:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:43.500 15:24:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.500 15:24:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:43.500 15:24:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:43.500 15:24:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:43.500 15:24:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.500 15:24:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.500 15:24:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:13:43.500 15:24:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:43.500 15:24:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:43.500 15:24:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:43.500 15:24:00 -- common/autotest_common.sh@10 -- # set +x 00:13:51.648 15:24:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:51.648 15:24:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:51.648 15:24:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:51.648 15:24:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:51.649 15:24:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:51.649 15:24:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:51.649 15:24:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:51.649 15:24:07 -- nvmf/common.sh@295 -- # net_devs=() 00:13:51.649 15:24:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:51.649 15:24:07 -- nvmf/common.sh@296 -- # e810=() 00:13:51.649 15:24:07 -- nvmf/common.sh@296 -- # local -ga e810 00:13:51.649 15:24:07 -- nvmf/common.sh@297 -- # x722=() 00:13:51.649 15:24:07 -- nvmf/common.sh@297 -- # local -ga x722 00:13:51.649 15:24:07 -- nvmf/common.sh@298 -- # mlx=() 00:13:51.649 15:24:07 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:51.649 15:24:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:51.649 15:24:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:51.649 15:24:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:51.649 15:24:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:51.649 15:24:07 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:51.649 15:24:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:51.649 15:24:07 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:51.649 15:24:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:13:51.649 15:24:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:51.649 15:24:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:51.649 15:24:07 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:51.649 15:24:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:51.649 15:24:07 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:51.649 15:24:07 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:51.649 15:24:07 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:51.649 15:24:07 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:51.649 15:24:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:51.649 15:24:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.649 15:24:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:51.649 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:51.649 15:24:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:51.649 15:24:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:51.649 15:24:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.649 15:24:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.649 15:24:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:51.649 15:24:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.649 15:24:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:51.649 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:51.649 15:24:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:51.649 15:24:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:51.649 15:24:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.649 15:24:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.649 15:24:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:51.649 15:24:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:51.649 15:24:07 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:51.649 
15:24:07 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:51.649 15:24:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.649 15:24:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.649 15:24:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:51.649 15:24:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.649 15:24:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:51.649 Found net devices under 0000:31:00.0: cvl_0_0 00:13:51.649 15:24:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.649 15:24:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.649 15:24:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.649 15:24:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:51.649 15:24:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.649 15:24:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:51.649 Found net devices under 0000:31:00.1: cvl_0_1 00:13:51.649 15:24:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.649 15:24:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:51.649 15:24:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:51.649 15:24:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:51.649 15:24:07 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:51.649 15:24:07 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:51.649 15:24:07 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:51.649 15:24:07 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:51.649 15:24:07 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:51.649 15:24:07 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:51.649 15:24:07 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:51.649 15:24:07 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:51.649 15:24:07 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:51.649 15:24:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:51.649 15:24:07 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:51.649 15:24:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:51.649 15:24:07 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:51.649 15:24:07 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:51.649 15:24:07 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:51.649 15:24:07 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:51.649 15:24:07 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:51.649 15:24:07 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:51.649 15:24:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:51.649 15:24:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:51.649 15:24:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:51.649 15:24:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:51.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:13:51.649 00:13:51.649 --- 10.0.0.2 ping statistics --- 00:13:51.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.649 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:13:51.649 15:24:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:51.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:51.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:13:51.649 00:13:51.649 --- 10.0.0.1 ping statistics --- 00:13:51.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.649 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:13:51.649 15:24:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.649 15:24:08 -- nvmf/common.sh@411 -- # return 0 00:13:51.649 15:24:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:51.649 15:24:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.649 15:24:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:51.649 15:24:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:51.649 15:24:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.649 15:24:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:51.649 15:24:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:51.649 15:24:08 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:13:51.649 15:24:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:51.649 15:24:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:51.649 15:24:08 -- common/autotest_common.sh@10 -- # set +x 00:13:51.649 ************************************ 00:13:51.649 START TEST nvmf_host_management 00:13:51.649 ************************************ 00:13:51.649 15:24:08 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:13:51.649 15:24:08 -- target/host_management.sh@69 -- # starttarget 00:13:51.649 15:24:08 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:51.649 15:24:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:51.649 15:24:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:51.649 15:24:08 -- common/autotest_common.sh@10 -- # set +x 00:13:51.649 15:24:08 -- nvmf/common.sh@470 -- # nvmfpid=1565831 00:13:51.649 15:24:08 -- nvmf/common.sh@471 -- # waitforlisten 1565831 
00:13:51.649 15:24:08 -- common/autotest_common.sh@817 -- # '[' -z 1565831 ']'
00:13:51.649 15:24:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:51.649 15:24:08 -- common/autotest_common.sh@822 -- # local max_retries=100
00:13:51.649 15:24:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:51.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:51.649 15:24:08 -- common/autotest_common.sh@826 -- # xtrace_disable
00:13:51.649 15:24:08 -- common/autotest_common.sh@10 -- # set +x
00:13:51.649 15:24:08 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:13:51.649 [2024-04-26 15:24:08.379801] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:13:51.649 [2024-04-26 15:24:08.379855] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:51.649 EAL: No free 2048 kB hugepages reported on node 1
00:13:51.649 [2024-04-26 15:24:08.465285] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:51.649 [2024-04-26 15:24:08.557270] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:51.649 [2024-04-26 15:24:08.557329] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:51.649 [2024-04-26 15:24:08.557338] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:51.649 [2024-04-26 15:24:08.557346] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:51.649 [2024-04-26 15:24:08.557353] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:51.650 [2024-04-26 15:24:08.557486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:13:51.650 [2024-04-26 15:24:08.557652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:13:51.650 [2024-04-26 15:24:08.557880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:51.650 [2024-04-26 15:24:08.557880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:13:51.911 15:24:09 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:13:51.911 15:24:09 -- common/autotest_common.sh@850 -- # return 0
00:13:51.911 15:24:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:13:51.911 15:24:09 -- common/autotest_common.sh@716 -- # xtrace_disable
00:13:51.911 15:24:09 -- common/autotest_common.sh@10 -- # set +x
00:13:51.911 15:24:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:51.911 15:24:09 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:13:51.911 15:24:09 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:51.911 15:24:09 -- common/autotest_common.sh@10 -- # set +x
00:13:51.911 [2024-04-26 15:24:09.176254] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:51.911 15:24:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:51.911 15:24:09 -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:13:51.911 15:24:09 -- common/autotest_common.sh@710 -- # xtrace_disable
00:13:51.911 15:24:09 -- common/autotest_common.sh@10 -- # set +x
00:13:51.911 15:24:09 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:13:51.911 15:24:09 -- target/host_management.sh@23 -- # cat
00:13:51.911 15:24:09 -- target/host_management.sh@30 -- # rpc_cmd
00:13:51.911 15:24:09 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:51.911 15:24:09 -- common/autotest_common.sh@10 -- # set +x
00:13:51.911 Malloc0
00:13:51.911 [2024-04-26 15:24:09.239581] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:51.911 15:24:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:51.911 15:24:09 -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:13:51.911 15:24:09 -- common/autotest_common.sh@716 -- # xtrace_disable
00:13:51.911 15:24:09 -- common/autotest_common.sh@10 -- # set +x
00:13:51.911 15:24:09 -- target/host_management.sh@73 -- # perfpid=1566114
00:13:51.911 15:24:09 -- target/host_management.sh@74 -- # waitforlisten 1566114 /var/tmp/bdevperf.sock
00:13:51.911 15:24:09 -- common/autotest_common.sh@817 -- # '[' -z 1566114 ']'
00:13:51.911 15:24:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:13:51.911 15:24:09 -- common/autotest_common.sh@822 -- # local max_retries=100
00:13:51.911 15:24:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:13:51.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:13:51.911 15:24:09 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:13:51.911 15:24:09 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:13:51.911 15:24:09 -- common/autotest_common.sh@826 -- # xtrace_disable
00:13:51.911 15:24:09 -- common/autotest_common.sh@10 -- # set +x
00:13:51.911 15:24:09 -- nvmf/common.sh@521 -- # config=()
00:13:51.911 15:24:09 -- nvmf/common.sh@521 -- # local subsystem config
00:13:51.911 15:24:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:13:51.911 15:24:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:13:51.911 {
00:13:51.911   "params": {
00:13:51.911     "name": "Nvme$subsystem",
00:13:51.911     "trtype": "$TEST_TRANSPORT",
00:13:51.911     "traddr": "$NVMF_FIRST_TARGET_IP",
00:13:51.911     "adrfam": "ipv4",
00:13:51.911     "trsvcid": "$NVMF_PORT",
00:13:51.911     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:13:51.911     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:13:51.911     "hdgst": ${hdgst:-false},
00:13:51.911     "ddgst": ${ddgst:-false}
00:13:51.911   },
00:13:51.911   "method": "bdev_nvme_attach_controller"
00:13:51.911 }
00:13:51.911 EOF
00:13:51.911 )")
00:13:51.911 15:24:09 -- nvmf/common.sh@543 -- # cat
00:13:51.911 15:24:09 -- nvmf/common.sh@545 -- # jq .
00:13:51.911 15:24:09 -- nvmf/common.sh@546 -- # IFS=,
00:13:51.911 15:24:09 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:13:51.911   "params": {
00:13:51.911     "name": "Nvme0",
00:13:51.911     "trtype": "tcp",
00:13:51.911     "traddr": "10.0.0.2",
00:13:51.911     "adrfam": "ipv4",
00:13:51.911     "trsvcid": "4420",
00:13:51.911     "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:13:51.911     "hostnqn": "nqn.2016-06.io.spdk:host0",
00:13:51.911     "hdgst": false,
00:13:51.911     "ddgst": false
00:13:51.911   },
00:13:51.911   "method": "bdev_nvme_attach_controller"
00:13:51.911 }'
00:13:51.912 [2024-04-26 15:24:09.337230] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:13:51.912 [2024-04-26 15:24:09.337284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1566114 ]
00:13:52.173 EAL: No free 2048 kB hugepages reported on node 1
00:13:52.173 [2024-04-26 15:24:09.397344] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:52.173 [2024-04-26 15:24:09.460466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:52.433 Running I/O for 10 seconds...
00:13:52.693 15:24:10 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:13:52.693 15:24:10 -- common/autotest_common.sh@850 -- # return 0
00:13:52.693 15:24:10 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:13:52.693 15:24:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:52.693 15:24:10 -- common/autotest_common.sh@10 -- # set +x
00:13:52.693 15:24:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:52.693 15:24:10 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:13:52.693 15:24:10 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:13:52.693 15:24:10 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:13:52.693 15:24:10 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:13:52.693 15:24:10 -- target/host_management.sh@52 -- # local ret=1
00:13:52.693 15:24:10 -- target/host_management.sh@53 -- # local i
00:13:52.693 15:24:10 -- target/host_management.sh@54 -- # (( i = 10 ))
00:13:52.693 15:24:10 -- target/host_management.sh@54 -- # (( i != 0 ))
00:13:52.693 15:24:10 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:13:52.693 15:24:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:52.693 15:24:10 -- common/autotest_common.sh@10 -- # set +x
00:13:52.693 15:24:10 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:13:52.955 15:24:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:52.955 15:24:10 -- target/host_management.sh@55 -- # read_io_count=590
00:13:52.955 15:24:10 -- target/host_management.sh@58 -- # '[' 590 -ge 100 ']'
00:13:52.955 15:24:10 -- target/host_management.sh@59 -- # ret=0
00:13:52.955 15:24:10 -- target/host_management.sh@60 -- # break
00:13:52.955 15:24:10 -- target/host_management.sh@64 -- # return 0
00:13:52.955 15:24:10 -- 
target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:13:52.955 15:24:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:52.955 15:24:10 -- common/autotest_common.sh@10 -- # set +x
00:13:52.955 [2024-04-26 15:24:10.186711] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaccf0 is same with the state(5) to be set
00:13:52.955 [2024-04-26 15:24:10.186758] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaccf0 is same with the state(5) to be set
00:13:52.955 [2024-04-26 15:24:10.186766] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaccf0 is same with the state(5) to be set
00:13:52.955 [2024-04-26 15:24:10.186773] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaccf0 is same with the state(5) to be set
00:13:52.955 [2024-04-26 15:24:10.186780] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaccf0 is same with the state(5) to be set
00:13:52.955 [2024-04-26 15:24:10.186786] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaccf0 is same with the state(5) to be set
00:13:52.955 [2024-04-26 15:24:10.186793] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaccf0 is same with the state(5) to be set
00:13:52.955 [2024-04-26 15:24:10.186799] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaccf0 is same with the state(5) to be set
00:13:52.955 15:24:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:52.955 15:24:10 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:13:52.955 15:24:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:52.955 15:24:10 -- common/autotest_common.sh@10 -- # set +x
00:13:52.955 [2024-04-26 15:24:10.198739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:52.955 [2024-04-26 15:24:10.198781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:52.955 [... identical command/completion notice pairs repeated for the remaining in-flight I/O (cids 0-63, READ lba:91520-93056 and WRITE lba:93312-99584), each ABORTED - SQ DELETION (00/08) on qid 1 ...]
00:13:52.957 [2024-04-26 15:24:10.199897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:13:52.957 [2024-04-26 15:24:10.199939] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23e8e10 was disconnected and freed. reset controller.
00:13:52.957 [2024-04-26 15:24:10.199993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:13:52.957 [2024-04-26 15:24:10.200003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:52.957 [2024-04-26 15:24:10.200012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:13:52.957 [2024-04-26 15:24:10.200020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:52.957 [2024-04-26 15:24:10.200028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:13:52.957 [2024-04-26 15:24:10.200036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.957 [2024-04-26 15:24:10.200044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:52.957 [2024-04-26 15:24:10.200051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:52.957 [2024-04-26 15:24:10.200060] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd84a0 is same with the state(5) to be set 00:13:52.957 [2024-04-26 15:24:10.201229] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:52.957 task offset: 93184 on job bdev=Nvme0n1 fails 00:13:52.957 00:13:52.957 Latency(us) 00:13:52.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.957 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:52.957 Job: Nvme0n1 ended in about 0.44 seconds with error 00:13:52.957 Verification LBA range: start 0x0 length 0x400 00:13:52.957 Nvme0n1 : 0.44 1641.94 102.62 146.97 0.00 34697.00 1665.71 33641.81 00:13:52.957 =================================================================================================================== 00:13:52.957 Total : 1641.94 102.62 146.97 0.00 34697.00 1665.71 33641.81 00:13:52.957 [2024-04-26 15:24:10.203199] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:52.957 [2024-04-26 15:24:10.203218] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd84a0 (9): Bad file descriptor 00:13:52.957 15:24:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:52.957 15:24:10 -- target/host_management.sh@87 -- # sleep 1 00:13:52.957 [2024-04-26 15:24:10.265076] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:53.901 15:24:11 -- target/host_management.sh@91 -- # kill -9 1566114 00:13:53.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1566114) - No such process 00:13:53.901 15:24:11 -- target/host_management.sh@91 -- # true 00:13:53.901 15:24:11 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:53.901 15:24:11 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:53.901 15:24:11 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:53.901 15:24:11 -- nvmf/common.sh@521 -- # config=() 00:13:53.901 15:24:11 -- nvmf/common.sh@521 -- # local subsystem config 00:13:53.901 15:24:11 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:53.901 15:24:11 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:53.901 { 00:13:53.901 "params": { 00:13:53.901 "name": "Nvme$subsystem", 00:13:53.901 "trtype": "$TEST_TRANSPORT", 00:13:53.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:53.901 "adrfam": "ipv4", 00:13:53.901 "trsvcid": "$NVMF_PORT", 00:13:53.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:53.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:53.901 "hdgst": ${hdgst:-false}, 00:13:53.902 "ddgst": ${ddgst:-false} 00:13:53.902 }, 00:13:53.902 "method": "bdev_nvme_attach_controller" 00:13:53.902 } 00:13:53.902 EOF 00:13:53.902 )") 00:13:53.902 15:24:11 -- nvmf/common.sh@543 -- # cat 00:13:53.902 15:24:11 -- nvmf/common.sh@545 -- # jq . 
00:13:53.902 15:24:11 -- nvmf/common.sh@546 -- # IFS=, 00:13:53.902 15:24:11 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:53.902 "params": { 00:13:53.902 "name": "Nvme0", 00:13:53.902 "trtype": "tcp", 00:13:53.902 "traddr": "10.0.0.2", 00:13:53.902 "adrfam": "ipv4", 00:13:53.902 "trsvcid": "4420", 00:13:53.902 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:53.902 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:53.902 "hdgst": false, 00:13:53.902 "ddgst": false 00:13:53.902 }, 00:13:53.902 "method": "bdev_nvme_attach_controller" 00:13:53.902 }' 00:13:53.902 [2024-04-26 15:24:11.261349] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:13:53.902 [2024-04-26 15:24:11.261406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1566654 ] 00:13:53.902 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.902 [2024-04-26 15:24:11.319982] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.163 [2024-04-26 15:24:11.382558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.424 Running I/O for 1 seconds... 
00:13:55.366 00:13:55.366 Latency(us) 00:13:55.366 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.366 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:55.366 Verification LBA range: start 0x0 length 0x400 00:13:55.366 Nvme0n1 : 1.03 1617.42 101.09 0.00 0.00 38883.25 6144.00 33641.81 00:13:55.366 =================================================================================================================== 00:13:55.366 Total : 1617.42 101.09 0.00 0.00 38883.25 6144.00 33641.81 00:13:55.366 15:24:12 -- target/host_management.sh@102 -- # stoptarget 00:13:55.366 15:24:12 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:55.627 15:24:12 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:55.627 15:24:12 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:55.627 15:24:12 -- target/host_management.sh@40 -- # nvmftestfini 00:13:55.627 15:24:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:55.627 15:24:12 -- nvmf/common.sh@117 -- # sync 00:13:55.627 15:24:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:55.627 15:24:12 -- nvmf/common.sh@120 -- # set +e 00:13:55.627 15:24:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:55.627 15:24:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:55.627 rmmod nvme_tcp 00:13:55.627 rmmod nvme_fabrics 00:13:55.627 rmmod nvme_keyring 00:13:55.627 15:24:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:55.627 15:24:12 -- nvmf/common.sh@124 -- # set -e 00:13:55.627 15:24:12 -- nvmf/common.sh@125 -- # return 0 00:13:55.627 15:24:12 -- nvmf/common.sh@478 -- # '[' -n 1565831 ']' 00:13:55.627 15:24:12 -- nvmf/common.sh@479 -- # killprocess 1565831 00:13:55.627 15:24:12 -- common/autotest_common.sh@936 -- # '[' -z 1565831 ']' 00:13:55.627 15:24:12 -- 
common/autotest_common.sh@940 -- # kill -0 1565831 00:13:55.627 15:24:12 -- common/autotest_common.sh@941 -- # uname 00:13:55.627 15:24:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:55.627 15:24:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1565831 00:13:55.627 15:24:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:55.627 15:24:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:55.627 15:24:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1565831' 00:13:55.627 killing process with pid 1565831 00:13:55.627 15:24:12 -- common/autotest_common.sh@955 -- # kill 1565831 00:13:55.627 15:24:12 -- common/autotest_common.sh@960 -- # wait 1565831 00:13:55.627 [2024-04-26 15:24:13.059386] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:55.889 15:24:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:55.889 15:24:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:55.889 15:24:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:55.889 15:24:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:55.889 15:24:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:55.889 15:24:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.889 15:24:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.889 15:24:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.890 15:24:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:57.890 00:13:57.890 real 0m6.840s 00:13:57.890 user 0m20.750s 00:13:57.890 sys 0m1.010s 00:13:57.890 15:24:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:57.890 15:24:15 -- common/autotest_common.sh@10 -- # set +x 00:13:57.890 ************************************ 00:13:57.890 END TEST nvmf_host_management 00:13:57.890 ************************************ 00:13:57.890 15:24:15 -- 
target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:57.890 00:13:57.890 real 0m14.668s 00:13:57.890 user 0m22.830s 00:13:57.890 sys 0m6.660s 00:13:57.890 15:24:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:57.890 15:24:15 -- common/autotest_common.sh@10 -- # set +x 00:13:57.890 ************************************ 00:13:57.890 END TEST nvmf_host_management 00:13:57.890 ************************************ 00:13:57.890 15:24:15 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:57.890 15:24:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:57.890 15:24:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:57.890 15:24:15 -- common/autotest_common.sh@10 -- # set +x 00:13:58.152 ************************************ 00:13:58.152 START TEST nvmf_lvol 00:13:58.152 ************************************ 00:13:58.152 15:24:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:58.152 * Looking for test storage... 
00:13:58.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:58.152 15:24:15 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:58.153 15:24:15 -- nvmf/common.sh@7 -- # uname -s 00:13:58.153 15:24:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.153 15:24:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.153 15:24:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.153 15:24:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.153 15:24:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.153 15:24:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.153 15:24:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.153 15:24:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.153 15:24:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.153 15:24:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.153 15:24:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:58.153 15:24:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:58.153 15:24:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.153 15:24:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.153 15:24:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:58.153 15:24:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.153 15:24:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:58.153 15:24:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.153 15:24:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.153 15:24:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.153 15:24:15 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.153 15:24:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.153 15:24:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.153 15:24:15 -- paths/export.sh@5 -- # export PATH 00:13:58.153 15:24:15 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.153 15:24:15 -- nvmf/common.sh@47 -- # : 0 00:13:58.153 15:24:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:58.153 15:24:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:58.153 15:24:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.153 15:24:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.153 15:24:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.153 15:24:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:58.153 15:24:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:58.153 15:24:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:58.153 15:24:15 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:58.153 15:24:15 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:58.153 15:24:15 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:58.153 15:24:15 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:58.153 15:24:15 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:58.153 15:24:15 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:58.153 15:24:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:58.153 15:24:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.153 15:24:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:58.153 15:24:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:58.153 15:24:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 
00:13:58.153 15:24:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.153 15:24:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.153 15:24:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.153 15:24:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:58.153 15:24:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:58.153 15:24:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:58.153 15:24:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.303 15:24:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:06.303 15:24:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:06.303 15:24:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:06.303 15:24:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:06.303 15:24:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:06.303 15:24:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:06.303 15:24:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:06.303 15:24:22 -- nvmf/common.sh@295 -- # net_devs=() 00:14:06.303 15:24:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:06.303 15:24:22 -- nvmf/common.sh@296 -- # e810=() 00:14:06.303 15:24:22 -- nvmf/common.sh@296 -- # local -ga e810 00:14:06.303 15:24:22 -- nvmf/common.sh@297 -- # x722=() 00:14:06.303 15:24:22 -- nvmf/common.sh@297 -- # local -ga x722 00:14:06.303 15:24:22 -- nvmf/common.sh@298 -- # mlx=() 00:14:06.303 15:24:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:06.303 15:24:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:06.303 15:24:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:06.303 15:24:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:06.303 15:24:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:06.303 15:24:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:06.303 15:24:22 -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:06.303 15:24:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:06.303 15:24:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:06.303 15:24:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:06.303 15:24:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:06.303 15:24:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:06.303 15:24:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:06.303 15:24:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:06.303 15:24:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:06.303 15:24:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:06.303 15:24:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:06.303 15:24:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:06.303 15:24:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:06.303 15:24:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:06.303 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:06.303 15:24:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:06.303 15:24:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:06.303 15:24:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.303 15:24:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.303 15:24:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:06.303 15:24:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:06.303 15:24:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:06.303 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:06.303 15:24:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:06.303 15:24:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:06.303 15:24:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.303 15:24:22 -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.303 15:24:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:06.303 15:24:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:06.303 15:24:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:06.303 15:24:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:06.303 15:24:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:06.303 15:24:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.303 15:24:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:06.303 15:24:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.303 15:24:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:06.303 Found net devices under 0000:31:00.0: cvl_0_0 00:14:06.303 15:24:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.303 15:24:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:06.303 15:24:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.303 15:24:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:06.303 15:24:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.303 15:24:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:06.303 Found net devices under 0000:31:00.1: cvl_0_1 00:14:06.303 15:24:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.303 15:24:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:06.303 15:24:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:06.303 15:24:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:06.303 15:24:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:06.303 15:24:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:06.303 15:24:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:06.303 15:24:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:06.303 15:24:22 -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:06.303 15:24:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:06.303 15:24:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:06.303 15:24:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:06.303 15:24:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:06.303 15:24:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:06.303 15:24:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:06.303 15:24:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:06.303 15:24:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:06.303 15:24:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:06.303 15:24:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:06.303 15:24:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:06.303 15:24:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:06.303 15:24:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:06.303 15:24:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:06.303 15:24:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:06.303 15:24:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:06.303 15:24:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:06.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:14:06.304 00:14:06.304 --- 10.0.0.2 ping statistics --- 00:14:06.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.304 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:14:06.304 15:24:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:06.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:06.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:14:06.304 00:14:06.304 --- 10.0.0.1 ping statistics --- 00:14:06.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.304 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:14:06.304 15:24:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.304 15:24:22 -- nvmf/common.sh@411 -- # return 0 00:14:06.304 15:24:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:06.304 15:24:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.304 15:24:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:06.304 15:24:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:06.304 15:24:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.304 15:24:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:06.304 15:24:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:06.304 15:24:22 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:06.304 15:24:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:06.304 15:24:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:06.304 15:24:22 -- common/autotest_common.sh@10 -- # set +x 00:14:06.304 15:24:22 -- nvmf/common.sh@470 -- # nvmfpid=1571232 00:14:06.304 15:24:22 -- nvmf/common.sh@471 -- # waitforlisten 1571232 00:14:06.304 15:24:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:06.304 15:24:22 -- common/autotest_common.sh@817 -- # '[' -z 1571232 ']' 00:14:06.304 15:24:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.304 15:24:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:06.304 15:24:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:06.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.304 15:24:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:06.304 15:24:22 -- common/autotest_common.sh@10 -- # set +x 00:14:06.304 [2024-04-26 15:24:22.924520] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:14:06.304 [2024-04-26 15:24:22.924588] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.304 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.304 [2024-04-26 15:24:22.997880] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:06.304 [2024-04-26 15:24:23.070526] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.304 [2024-04-26 15:24:23.070568] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.304 [2024-04-26 15:24:23.070575] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.304 [2024-04-26 15:24:23.070582] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.304 [2024-04-26 15:24:23.070587] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:06.304 [2024-04-26 15:24:23.070727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.304 [2024-04-26 15:24:23.070881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.304 [2024-04-26 15:24:23.070884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.304 15:24:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:06.304 15:24:23 -- common/autotest_common.sh@850 -- # return 0 00:14:06.304 15:24:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:06.304 15:24:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:06.304 15:24:23 -- common/autotest_common.sh@10 -- # set +x 00:14:06.304 15:24:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.304 15:24:23 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:06.565 [2024-04-26 15:24:23.879515] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.565 15:24:23 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:06.825 15:24:24 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:06.825 15:24:24 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:07.086 15:24:24 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:07.086 15:24:24 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:07.086 15:24:24 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:07.347 15:24:24 -- target/nvmf_lvol.sh@29 -- # lvs=9ea2fcb8-3e7f-4acf-a108-eae4b7c8f231 00:14:07.347 15:24:24 -- target/nvmf_lvol.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9ea2fcb8-3e7f-4acf-a108-eae4b7c8f231 lvol 20 00:14:07.608 15:24:24 -- target/nvmf_lvol.sh@32 -- # lvol=46ed1c70-df82-4ab3-bb28-e562ea03b0e2 00:14:07.608 15:24:24 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:07.608 15:24:24 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 46ed1c70-df82-4ab3-bb28-e562ea03b0e2 00:14:07.869 15:24:25 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:07.869 [2024-04-26 15:24:25.263260] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.869 15:24:25 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:08.128 15:24:25 -- target/nvmf_lvol.sh@42 -- # perf_pid=1571921 00:14:08.128 15:24:25 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:08.128 15:24:25 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:08.128 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.069 15:24:26 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 46ed1c70-df82-4ab3-bb28-e562ea03b0e2 MY_SNAPSHOT 00:14:09.330 15:24:26 -- target/nvmf_lvol.sh@47 -- # snapshot=a162c56a-0082-4205-9bdc-42da0ffda96e 00:14:09.330 15:24:26 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 
46ed1c70-df82-4ab3-bb28-e562ea03b0e2 30 00:14:09.591 15:24:26 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a162c56a-0082-4205-9bdc-42da0ffda96e MY_CLONE 00:14:09.852 15:24:27 -- target/nvmf_lvol.sh@49 -- # clone=c0f862e6-ee97-4c2b-a58d-54ccbcf9f569 00:14:09.852 15:24:27 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c0f862e6-ee97-4c2b-a58d-54ccbcf9f569 00:14:10.113 15:24:27 -- target/nvmf_lvol.sh@53 -- # wait 1571921 00:14:20.121 Initializing NVMe Controllers 00:14:20.121 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:20.121 Controller IO queue size 128, less than required. 00:14:20.121 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:20.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:20.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:20.121 Initialization complete. Launching workers. 
00:14:20.121 ======================================================== 00:14:20.121 Latency(us) 00:14:20.121 Device Information : IOPS MiB/s Average min max 00:14:20.121 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12036.00 47.02 10642.65 1616.07 56582.69 00:14:20.121 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17446.50 68.15 7337.05 1217.52 43952.17 00:14:20.121 ======================================================== 00:14:20.121 Total : 29482.50 115.17 8686.54 1217.52 56582.69 00:14:20.121 00:14:20.121 15:24:35 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:20.121 15:24:35 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 46ed1c70-df82-4ab3-bb28-e562ea03b0e2 00:14:20.121 15:24:36 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9ea2fcb8-3e7f-4acf-a108-eae4b7c8f231 00:14:20.121 15:24:36 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:20.121 15:24:36 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:20.121 15:24:36 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:20.121 15:24:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:20.121 15:24:36 -- nvmf/common.sh@117 -- # sync 00:14:20.121 15:24:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:20.121 15:24:36 -- nvmf/common.sh@120 -- # set +e 00:14:20.121 15:24:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:20.121 15:24:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:20.121 rmmod nvme_tcp 00:14:20.121 rmmod nvme_fabrics 00:14:20.121 rmmod nvme_keyring 00:14:20.121 15:24:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:20.121 15:24:36 -- nvmf/common.sh@124 -- # set -e 00:14:20.121 15:24:36 -- nvmf/common.sh@125 -- # return 0 00:14:20.121 15:24:36 -- nvmf/common.sh@478 -- # '[' -n 
1571232 ']' 00:14:20.121 15:24:36 -- nvmf/common.sh@479 -- # killprocess 1571232 00:14:20.121 15:24:36 -- common/autotest_common.sh@936 -- # '[' -z 1571232 ']' 00:14:20.121 15:24:36 -- common/autotest_common.sh@940 -- # kill -0 1571232 00:14:20.121 15:24:36 -- common/autotest_common.sh@941 -- # uname 00:14:20.121 15:24:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:20.121 15:24:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1571232 00:14:20.121 15:24:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:20.121 15:24:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:20.121 15:24:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1571232' 00:14:20.121 killing process with pid 1571232 00:14:20.121 15:24:36 -- common/autotest_common.sh@955 -- # kill 1571232 00:14:20.121 15:24:36 -- common/autotest_common.sh@960 -- # wait 1571232 00:14:20.121 15:24:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:20.121 15:24:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:20.121 15:24:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:20.121 15:24:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:20.121 15:24:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:20.121 15:24:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.121 15:24:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:20.121 15:24:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.510 15:24:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:21.510 00:14:21.510 real 0m23.305s 00:14:21.510 user 1m3.784s 00:14:21.510 sys 0m7.782s 00:14:21.510 15:24:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:21.510 15:24:38 -- common/autotest_common.sh@10 -- # set +x 00:14:21.510 ************************************ 00:14:21.510 END TEST nvmf_lvol 00:14:21.510 ************************************ 
00:14:21.510 15:24:38 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:21.510 15:24:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:21.510 15:24:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:21.510 15:24:38 -- common/autotest_common.sh@10 -- # set +x 00:14:21.510 ************************************ 00:14:21.510 START TEST nvmf_lvs_grow 00:14:21.510 ************************************ 00:14:21.510 15:24:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:21.772 * Looking for test storage... 00:14:21.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.772 15:24:38 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:21.772 15:24:38 -- nvmf/common.sh@7 -- # uname -s 00:14:21.772 15:24:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.772 15:24:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.772 15:24:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.772 15:24:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.772 15:24:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.772 15:24:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.772 15:24:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.772 15:24:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.772 15:24:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.772 15:24:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.772 15:24:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:21.772 15:24:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:21.772 15:24:38 -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.772 15:24:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.772 15:24:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:21.772 15:24:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.772 15:24:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:21.772 15:24:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.772 15:24:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.772 15:24:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.772 15:24:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.772 15:24:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.772 15:24:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.772 15:24:39 -- paths/export.sh@5 -- # export PATH 00:14:21.772 15:24:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.772 15:24:39 -- nvmf/common.sh@47 -- # : 0 00:14:21.772 15:24:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:21.772 15:24:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:21.772 15:24:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.772 15:24:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.772 15:24:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.772 15:24:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:21.772 15:24:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:21.772 15:24:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:21.772 15:24:39 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:21.772 15:24:39 -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:21.772 15:24:39 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:21.772 15:24:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:21.772 15:24:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.772 15:24:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:21.772 15:24:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:21.772 15:24:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:21.772 15:24:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.772 15:24:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.772 15:24:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.772 15:24:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:21.772 15:24:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:21.772 15:24:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:21.772 15:24:39 -- common/autotest_common.sh@10 -- # set +x 00:14:29.915 15:24:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:29.915 15:24:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:29.915 15:24:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:29.915 15:24:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:29.915 15:24:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:29.915 15:24:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:29.915 15:24:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:29.915 15:24:45 -- nvmf/common.sh@295 -- # net_devs=() 00:14:29.915 15:24:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:29.915 15:24:45 -- nvmf/common.sh@296 -- # e810=() 00:14:29.915 15:24:45 -- nvmf/common.sh@296 -- # local -ga e810 00:14:29.915 15:24:45 -- nvmf/common.sh@297 -- # x722=() 00:14:29.915 15:24:45 -- nvmf/common.sh@297 -- # local -ga x722 00:14:29.915 15:24:45 -- nvmf/common.sh@298 -- # mlx=() 00:14:29.915 15:24:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:29.915 15:24:45 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:29.915 15:24:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:29.915 15:24:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:29.915 15:24:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:29.915 15:24:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:29.915 15:24:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:29.915 15:24:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:29.915 15:24:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:29.915 15:24:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:29.915 15:24:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:29.915 15:24:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:29.915 15:24:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:29.915 15:24:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:29.915 15:24:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:29.915 15:24:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:29.915 15:24:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:29.915 15:24:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:29.915 15:24:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.915 15:24:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:29.915 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:29.915 15:24:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.915 15:24:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.915 15:24:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.915 15:24:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.915 15:24:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.915 
15:24:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.915 15:24:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:29.915 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:29.915 15:24:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.915 15:24:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.915 15:24:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.915 15:24:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.915 15:24:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.915 15:24:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:29.915 15:24:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:29.915 15:24:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:29.915 15:24:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.915 15:24:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.915 15:24:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:29.915 15:24:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.915 15:24:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:29.915 Found net devices under 0000:31:00.0: cvl_0_0 00:14:29.915 15:24:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.915 15:24:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.915 15:24:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.915 15:24:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:29.915 15:24:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.915 15:24:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:29.915 Found net devices under 0000:31:00.1: cvl_0_1 00:14:29.915 15:24:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.915 15:24:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:29.915 15:24:45 -- 
nvmf/common.sh@403 -- # is_hw=yes 00:14:29.915 15:24:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:29.915 15:24:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:29.915 15:24:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:29.915 15:24:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.915 15:24:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.915 15:24:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:29.915 15:24:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:29.915 15:24:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:29.915 15:24:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:29.915 15:24:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:29.915 15:24:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:29.915 15:24:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:29.915 15:24:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:29.915 15:24:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:29.915 15:24:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:29.915 15:24:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:29.915 15:24:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:29.915 15:24:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:29.915 15:24:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:29.915 15:24:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:29.915 15:24:46 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:29.915 15:24:46 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:29.915 15:24:46 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:29.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:29.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:14:29.916 00:14:29.916 --- 10.0.0.2 ping statistics --- 00:14:29.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.916 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:14:29.916 15:24:46 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:29.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:29.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:14:29.916 00:14:29.916 --- 10.0.0.1 ping statistics --- 00:14:29.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.916 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:14:29.916 15:24:46 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.916 15:24:46 -- nvmf/common.sh@411 -- # return 0 00:14:29.916 15:24:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:29.916 15:24:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.916 15:24:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:29.916 15:24:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:29.916 15:24:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.916 15:24:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:29.916 15:24:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:29.916 15:24:46 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:29.916 15:24:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:29.916 15:24:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:29.916 15:24:46 -- common/autotest_common.sh@10 -- # set +x 00:14:29.916 15:24:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:29.916 15:24:46 -- nvmf/common.sh@470 -- # nvmfpid=1578313 00:14:29.916 15:24:46 -- nvmf/common.sh@471 -- # waitforlisten 1578313 00:14:29.916 15:24:46 -- 
common/autotest_common.sh@817 -- # '[' -z 1578313 ']' 00:14:29.916 15:24:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.916 15:24:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:29.916 15:24:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.916 15:24:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:29.916 15:24:46 -- common/autotest_common.sh@10 -- # set +x 00:14:29.916 [2024-04-26 15:24:46.245195] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:14:29.916 [2024-04-26 15:24:46.245245] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.916 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.916 [2024-04-26 15:24:46.303438] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.916 [2024-04-26 15:24:46.367066] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.916 [2024-04-26 15:24:46.367099] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.916 [2024-04-26 15:24:46.367106] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.916 [2024-04-26 15:24:46.367112] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.916 [2024-04-26 15:24:46.367118] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:29.916 [2024-04-26 15:24:46.367139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.916 15:24:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:29.916 15:24:47 -- common/autotest_common.sh@850 -- # return 0 00:14:29.916 15:24:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:29.916 15:24:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:29.916 15:24:47 -- common/autotest_common.sh@10 -- # set +x 00:14:29.916 15:24:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.916 15:24:47 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:29.916 [2024-04-26 15:24:47.202197] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.916 15:24:47 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:29.916 15:24:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:29.916 15:24:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:29.916 15:24:47 -- common/autotest_common.sh@10 -- # set +x 00:14:30.177 ************************************ 00:14:30.177 START TEST lvs_grow_clean 00:14:30.177 ************************************ 00:14:30.177 15:24:47 -- common/autotest_common.sh@1111 -- # lvs_grow 00:14:30.177 15:24:47 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:30.177 15:24:47 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:30.177 15:24:47 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:30.177 15:24:47 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:30.177 15:24:47 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:30.177 15:24:47 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:30.177 15:24:47 -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:30.177 15:24:47 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:30.177 15:24:47 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:30.177 15:24:47 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:30.177 15:24:47 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:30.438 15:24:47 -- target/nvmf_lvs_grow.sh@28 -- # lvs=b934d554-a6f1-411a-87e8-d375507de6da 00:14:30.438 15:24:47 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b934d554-a6f1-411a-87e8-d375507de6da 00:14:30.438 15:24:47 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:30.700 15:24:47 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:30.700 15:24:47 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:30.700 15:24:47 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b934d554-a6f1-411a-87e8-d375507de6da lvol 150 00:14:30.700 15:24:48 -- target/nvmf_lvs_grow.sh@33 -- # lvol=d660b536-2bda-469b-a7f1-3009762c05a0 00:14:30.700 15:24:48 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:30.700 15:24:48 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:30.962 [2024-04-26 15:24:48.190827] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:30.962 [2024-04-26 15:24:48.190882] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:30.962 true 00:14:30.962 15:24:48 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b934d554-a6f1-411a-87e8-d375507de6da 00:14:30.962 15:24:48 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:30.962 15:24:48 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:30.962 15:24:48 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:31.222 15:24:48 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d660b536-2bda-469b-a7f1-3009762c05a0 00:14:31.223 15:24:48 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:31.484 [2024-04-26 15:24:48.804672] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.484 15:24:48 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:31.746 15:24:48 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1578731 00:14:31.746 15:24:48 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:31.746 15:24:48 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1578731 /var/tmp/bdevperf.sock 00:14:31.746 15:24:48 -- common/autotest_common.sh@817 -- # '[' -z 1578731 ']' 00:14:31.746 15:24:48 -- common/autotest_common.sh@821 -- # 
local rpc_addr=/var/tmp/bdevperf.sock 00:14:31.746 15:24:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:31.746 15:24:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:31.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:31.746 15:24:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:31.746 15:24:48 -- common/autotest_common.sh@10 -- # set +x 00:14:31.746 15:24:48 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:31.746 [2024-04-26 15:24:49.031804] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:14:31.746 [2024-04-26 15:24:49.031859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578731 ] 00:14:31.746 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.746 [2024-04-26 15:24:49.107673] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.746 [2024-04-26 15:24:49.169657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.690 15:24:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:32.690 15:24:49 -- common/autotest_common.sh@850 -- # return 0 00:14:32.690 15:24:49 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:32.690 Nvme0n1 00:14:32.690 15:24:50 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 
00:14:32.951 [ 00:14:32.951 { 00:14:32.951 "name": "Nvme0n1", 00:14:32.951 "aliases": [ 00:14:32.951 "d660b536-2bda-469b-a7f1-3009762c05a0" 00:14:32.951 ], 00:14:32.951 "product_name": "NVMe disk", 00:14:32.951 "block_size": 4096, 00:14:32.951 "num_blocks": 38912, 00:14:32.951 "uuid": "d660b536-2bda-469b-a7f1-3009762c05a0", 00:14:32.951 "assigned_rate_limits": { 00:14:32.951 "rw_ios_per_sec": 0, 00:14:32.951 "rw_mbytes_per_sec": 0, 00:14:32.951 "r_mbytes_per_sec": 0, 00:14:32.951 "w_mbytes_per_sec": 0 00:14:32.951 }, 00:14:32.951 "claimed": false, 00:14:32.951 "zoned": false, 00:14:32.951 "supported_io_types": { 00:14:32.951 "read": true, 00:14:32.951 "write": true, 00:14:32.951 "unmap": true, 00:14:32.951 "write_zeroes": true, 00:14:32.951 "flush": true, 00:14:32.951 "reset": true, 00:14:32.951 "compare": true, 00:14:32.951 "compare_and_write": true, 00:14:32.951 "abort": true, 00:14:32.951 "nvme_admin": true, 00:14:32.951 "nvme_io": true 00:14:32.951 }, 00:14:32.951 "memory_domains": [ 00:14:32.951 { 00:14:32.951 "dma_device_id": "system", 00:14:32.951 "dma_device_type": 1 00:14:32.951 } 00:14:32.951 ], 00:14:32.951 "driver_specific": { 00:14:32.951 "nvme": [ 00:14:32.951 { 00:14:32.951 "trid": { 00:14:32.951 "trtype": "TCP", 00:14:32.951 "adrfam": "IPv4", 00:14:32.951 "traddr": "10.0.0.2", 00:14:32.951 "trsvcid": "4420", 00:14:32.951 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:32.951 }, 00:14:32.951 "ctrlr_data": { 00:14:32.951 "cntlid": 1, 00:14:32.951 "vendor_id": "0x8086", 00:14:32.951 "model_number": "SPDK bdev Controller", 00:14:32.951 "serial_number": "SPDK0", 00:14:32.951 "firmware_revision": "24.05", 00:14:32.951 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:32.951 "oacs": { 00:14:32.951 "security": 0, 00:14:32.951 "format": 0, 00:14:32.951 "firmware": 0, 00:14:32.951 "ns_manage": 0 00:14:32.951 }, 00:14:32.951 "multi_ctrlr": true, 00:14:32.951 "ana_reporting": false 00:14:32.951 }, 00:14:32.951 "vs": { 00:14:32.951 "nvme_version": "1.3" 00:14:32.951 }, 
00:14:32.951 "ns_data": { 00:14:32.951 "id": 1, 00:14:32.951 "can_share": true 00:14:32.951 } 00:14:32.951 } 00:14:32.951 ], 00:14:32.951 "mp_policy": "active_passive" 00:14:32.951 } 00:14:32.951 } 00:14:32.951 ] 00:14:32.951 15:24:50 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1579061 00:14:32.951 15:24:50 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:32.951 15:24:50 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:32.951 Running I/O for 10 seconds... 00:14:33.896 Latency(us) 00:14:33.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.896 Nvme0n1 : 1.00 17553.00 68.57 0.00 0.00 0.00 0.00 0.00 00:14:33.896 =================================================================================================================== 00:14:33.896 Total : 17553.00 68.57 0.00 0.00 0.00 0.00 0.00 00:14:33.896 00:14:34.839 15:24:52 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b934d554-a6f1-411a-87e8-d375507de6da 00:14:35.101 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.101 Nvme0n1 : 2.00 17639.00 68.90 0.00 0.00 0.00 0.00 0.00 00:14:35.101 =================================================================================================================== 00:14:35.101 Total : 17639.00 68.90 0.00 0.00 0.00 0.00 0.00 00:14:35.101 00:14:35.101 true 00:14:35.101 15:24:52 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b934d554-a6f1-411a-87e8-d375507de6da 00:14:35.101 15:24:52 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:35.101 15:24:52 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:35.101 15:24:52 -- 
target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:35.101 15:24:52 -- target/nvmf_lvs_grow.sh@65 -- # wait 1579061 00:14:36.044 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.044 Nvme0n1 : 3.00 17667.00 69.01 0.00 0.00 0.00 0.00 0.00 00:14:36.044 =================================================================================================================== 00:14:36.044 Total : 17667.00 69.01 0.00 0.00 0.00 0.00 0.00 00:14:36.044 00:14:36.985 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.985 Nvme0n1 : 4.00 17713.75 69.19 0.00 0.00 0.00 0.00 0.00 00:14:36.985 =================================================================================================================== 00:14:36.985 Total : 17713.75 69.19 0.00 0.00 0.00 0.00 0.00 00:14:36.985 00:14:37.929 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.929 Nvme0n1 : 5.00 17729.00 69.25 0.00 0.00 0.00 0.00 0.00 00:14:37.929 =================================================================================================================== 00:14:37.929 Total : 17729.00 69.25 0.00 0.00 0.00 0.00 0.00 00:14:37.929 00:14:38.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.871 Nvme0n1 : 6.00 17749.67 69.33 0.00 0.00 0.00 0.00 0.00 00:14:38.871 =================================================================================================================== 00:14:38.871 Total : 17749.67 69.33 0.00 0.00 0.00 0.00 0.00 00:14:38.871 00:14:40.255 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.255 Nvme0n1 : 7.00 17764.14 69.39 0.00 0.00 0.00 0.00 0.00 00:14:40.255 =================================================================================================================== 00:14:40.255 Total : 17764.14 69.39 0.00 0.00 0.00 0.00 0.00 00:14:40.255 00:14:41.199 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, 
depth: 128, IO size: 4096) 00:14:41.199 Nvme0n1 : 8.00 17775.25 69.43 0.00 0.00 0.00 0.00 0.00 00:14:41.199 =================================================================================================================== 00:14:41.199 Total : 17775.25 69.43 0.00 0.00 0.00 0.00 0.00 00:14:41.199 00:14:42.142 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.142 Nvme0n1 : 9.00 17783.56 69.47 0.00 0.00 0.00 0.00 0.00 00:14:42.142 =================================================================================================================== 00:14:42.142 Total : 17783.56 69.47 0.00 0.00 0.00 0.00 0.00 00:14:42.142 00:14:43.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.082 Nvme0n1 : 10.00 17790.30 69.49 0.00 0.00 0.00 0.00 0.00 00:14:43.082 =================================================================================================================== 00:14:43.082 Total : 17790.30 69.49 0.00 0.00 0.00 0.00 0.00 00:14:43.082 00:14:43.082 00:14:43.082 Latency(us) 00:14:43.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.082 Nvme0n1 : 10.01 17790.67 69.49 0.00 0.00 7190.93 4341.76 12724.91 00:14:43.082 =================================================================================================================== 00:14:43.082 Total : 17790.67 69.49 0.00 0.00 7190.93 4341.76 12724.91 00:14:43.082 0 00:14:43.082 15:25:00 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1578731 00:14:43.082 15:25:00 -- common/autotest_common.sh@936 -- # '[' -z 1578731 ']' 00:14:43.082 15:25:00 -- common/autotest_common.sh@940 -- # kill -0 1578731 00:14:43.082 15:25:00 -- common/autotest_common.sh@941 -- # uname 00:14:43.082 15:25:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:43.082 15:25:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1578731 
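The final latency summary above (17790.67 IOPS at an average of 7190.93 us) is internally consistent with the bdevperf configuration: with a queue depth of 128 (`-q 128`), Little's law predicts IOPS ≈ depth / latency. A quick cross-check, using only the figures reported in this run:

```shell
# Cross-check the reported bdevperf averages with Little's law:
# IOPS ~= queue_depth / average_latency.
# 128 is the bdevperf queue depth (-q 128); 7190.93 us is the average
# latency printed in the summary above.
PREDICTED_IOPS=$(awk 'BEGIN { printf "%.0f", 128 / (7190.93e-6) }')
echo "Little's law predicts ${PREDICTED_IOPS} IOPS vs 17791 reported"
```

The prediction lands within about 0.1% of the reported average, which is the expected behavior for a fixed-depth, fully saturated workload.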
00:14:43.082 15:25:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:43.082 15:25:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:43.082 15:25:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1578731' 00:14:43.082 killing process with pid 1578731 00:14:43.082 15:25:00 -- common/autotest_common.sh@955 -- # kill 1578731 00:14:43.082 Received shutdown signal, test time was about 10.000000 seconds 00:14:43.082 00:14:43.082 Latency(us) 00:14:43.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.082 =================================================================================================================== 00:14:43.082 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:43.082 15:25:00 -- common/autotest_common.sh@960 -- # wait 1578731 00:14:43.082 15:25:00 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:43.370 15:25:00 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b934d554-a6f1-411a-87e8-d375507de6da 00:14:43.370 15:25:00 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:43.644 15:25:00 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:43.644 15:25:00 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:43.644 15:25:00 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:43.644 [2024-04-26 15:25:00.961956] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:43.644 15:25:00 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b934d554-a6f1-411a-87e8-d375507de6da 00:14:43.644 15:25:00 -- common/autotest_common.sh@638 -- # local es=0 
00:14:43.644 15:25:00 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b934d554-a6f1-411a-87e8-d375507de6da 00:14:43.644 15:25:00 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:43.644 15:25:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:43.644 15:25:00 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:43.644 15:25:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:43.644 15:25:00 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:43.644 15:25:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:43.644 15:25:00 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:43.644 15:25:00 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:43.644 15:25:00 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b934d554-a6f1-411a-87e8-d375507de6da 00:14:43.911 request: 00:14:43.911 { 00:14:43.911 "uuid": "b934d554-a6f1-411a-87e8-d375507de6da", 00:14:43.911 "method": "bdev_lvol_get_lvstores", 00:14:43.911 "req_id": 1 00:14:43.911 } 00:14:43.911 Got JSON-RPC error response 00:14:43.911 response: 00:14:43.911 { 00:14:43.911 "code": -19, 00:14:43.911 "message": "No such device" 00:14:43.911 } 00:14:43.911 15:25:01 -- common/autotest_common.sh@641 -- # es=1 00:14:43.911 15:25:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:43.911 15:25:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:43.911 15:25:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:43.911 15:25:01 -- 
target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:43.911 aio_bdev 00:14:43.911 15:25:01 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev d660b536-2bda-469b-a7f1-3009762c05a0 00:14:43.911 15:25:01 -- common/autotest_common.sh@885 -- # local bdev_name=d660b536-2bda-469b-a7f1-3009762c05a0 00:14:43.911 15:25:01 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:43.911 15:25:01 -- common/autotest_common.sh@887 -- # local i 00:14:43.911 15:25:01 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:43.911 15:25:01 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:43.911 15:25:01 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:44.170 15:25:01 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d660b536-2bda-469b-a7f1-3009762c05a0 -t 2000 00:14:44.170 [ 00:14:44.170 { 00:14:44.170 "name": "d660b536-2bda-469b-a7f1-3009762c05a0", 00:14:44.171 "aliases": [ 00:14:44.171 "lvs/lvol" 00:14:44.171 ], 00:14:44.171 "product_name": "Logical Volume", 00:14:44.171 "block_size": 4096, 00:14:44.171 "num_blocks": 38912, 00:14:44.171 "uuid": "d660b536-2bda-469b-a7f1-3009762c05a0", 00:14:44.171 "assigned_rate_limits": { 00:14:44.171 "rw_ios_per_sec": 0, 00:14:44.171 "rw_mbytes_per_sec": 0, 00:14:44.171 "r_mbytes_per_sec": 0, 00:14:44.171 "w_mbytes_per_sec": 0 00:14:44.171 }, 00:14:44.171 "claimed": false, 00:14:44.171 "zoned": false, 00:14:44.171 "supported_io_types": { 00:14:44.171 "read": true, 00:14:44.171 "write": true, 00:14:44.171 "unmap": true, 00:14:44.171 "write_zeroes": true, 00:14:44.171 "flush": false, 00:14:44.171 "reset": true, 00:14:44.171 "compare": false, 00:14:44.171 "compare_and_write": false, 00:14:44.171 "abort": false, 00:14:44.171 
"nvme_admin": false, 00:14:44.171 "nvme_io": false 00:14:44.171 }, 00:14:44.171 "driver_specific": { 00:14:44.171 "lvol": { 00:14:44.171 "lvol_store_uuid": "b934d554-a6f1-411a-87e8-d375507de6da", 00:14:44.171 "base_bdev": "aio_bdev", 00:14:44.171 "thin_provision": false, 00:14:44.171 "snapshot": false, 00:14:44.171 "clone": false, 00:14:44.171 "esnap_clone": false 00:14:44.171 } 00:14:44.171 } 00:14:44.171 } 00:14:44.171 ] 00:14:44.171 15:25:01 -- common/autotest_common.sh@893 -- # return 0 00:14:44.171 15:25:01 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b934d554-a6f1-411a-87e8-d375507de6da 00:14:44.171 15:25:01 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:44.431 15:25:01 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:44.431 15:25:01 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b934d554-a6f1-411a-87e8-d375507de6da 00:14:44.431 15:25:01 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:44.691 15:25:01 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:44.691 15:25:01 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d660b536-2bda-469b-a7f1-3009762c05a0 00:14:44.691 15:25:02 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b934d554-a6f1-411a-87e8-d375507de6da 00:14:44.951 15:25:02 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:44.951 15:25:02 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:44.951 00:14:44.951 real 0m15.019s 00:14:44.951 user 0m14.719s 00:14:44.951 sys 0m1.275s 00:14:44.951 15:25:02 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:14:44.951 15:25:02 -- common/autotest_common.sh@10 -- # set +x 00:14:44.951 ************************************ 00:14:44.951 END TEST lvs_grow_clean 00:14:44.951 ************************************ 00:14:45.211 15:25:02 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:45.211 15:25:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:45.211 15:25:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:45.211 15:25:02 -- common/autotest_common.sh@10 -- # set +x 00:14:45.211 ************************************ 00:14:45.211 START TEST lvs_grow_dirty 00:14:45.211 ************************************ 00:14:45.211 15:25:02 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:14:45.211 15:25:02 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:45.211 15:25:02 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:45.211 15:25:02 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:45.211 15:25:02 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:45.211 15:25:02 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:45.211 15:25:02 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:45.211 15:25:02 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:45.211 15:25:02 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:45.211 15:25:02 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:45.471 15:25:02 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:45.471 15:25:02 -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:45.731 15:25:02 -- target/nvmf_lvs_grow.sh@28 -- # lvs=f9366eb3-5e7f-4584-bcd1-8db06fd4f392 00:14:45.731 15:25:02 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9366eb3-5e7f-4584-bcd1-8db06fd4f392 00:14:45.731 15:25:02 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:45.731 15:25:03 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:45.731 15:25:03 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:45.731 15:25:03 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f9366eb3-5e7f-4584-bcd1-8db06fd4f392 lvol 150 00:14:45.990 15:25:03 -- target/nvmf_lvs_grow.sh@33 -- # lvol=a68bfb90-b683-40f0-9eb9-59a07f911ddc 00:14:45.990 15:25:03 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:45.990 15:25:03 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:45.990 [2024-04-26 15:25:03.393303] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:45.990 [2024-04-26 15:25:03.393353] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:45.990 true 00:14:45.990 15:25:03 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9366eb3-5e7f-4584-bcd1-8db06fd4f392 00:14:45.990 15:25:03 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:46.249 15:25:03 -- target/nvmf_lvs_grow.sh@38 -- # (( 
data_clusters == 49 )) 00:14:46.249 15:25:03 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:46.509 15:25:03 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a68bfb90-b683-40f0-9eb9-59a07f911ddc 00:14:46.509 15:25:03 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:46.770 15:25:04 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:46.770 15:25:04 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1581816 00:14:46.770 15:25:04 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:46.770 15:25:04 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1581816 /var/tmp/bdevperf.sock 00:14:46.770 15:25:04 -- common/autotest_common.sh@817 -- # '[' -z 1581816 ']' 00:14:46.770 15:25:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:46.770 15:25:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:46.770 15:25:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:46.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
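Up to this point the dirty variant has grown the AIO backing file with plain `truncate` (200M at setup, 400M just before bdevperf starts) and rescanned it with `bdev_aio_rescan`. The file-size half of that sequence can be reproduced standalone; the rescan itself needs a running SPDK target, so it is omitted in this sketch:

```shell
# Mimic the test's backing-file growth: create a sparse 200M file, then grow
# it to 400M, as nvmf_lvs_grow.sh does before calling bdev_aio_rescan.
# Uses GNU stat's -c %s to read the byte size.
AIO_FILE=$(mktemp /tmp/aio_bdev.XXXXXX)

truncate -s 200M "$AIO_FILE"
SIZE_BEFORE=$(stat -c %s "$AIO_FILE")   # 200 * 1024 * 1024 bytes

truncate -s 400M "$AIO_FILE"
SIZE_AFTER=$(stat -c %s "$AIO_FILE")    # 400 * 1024 * 1024 bytes

echo "before=$SIZE_BEFORE after=$SIZE_AFTER"
rm -f "$AIO_FILE"
```

Because `truncate` only extends the sparse file, the growth is instantaneous and consumes no additional disk space until blocks are written.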
00:14:46.770 15:25:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:46.770 15:25:04 -- common/autotest_common.sh@10 -- # set +x 00:14:46.770 15:25:04 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:46.770 [2024-04-26 15:25:04.207977] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:14:46.770 [2024-04-26 15:25:04.208029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581816 ] 00:14:47.030 EAL: No free 2048 kB hugepages reported on node 1 00:14:47.030 [2024-04-26 15:25:04.281602] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.030 [2024-04-26 15:25:04.333834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.600 15:25:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:47.600 15:25:04 -- common/autotest_common.sh@850 -- # return 0 00:14:47.600 15:25:04 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:47.860 Nvme0n1 00:14:48.121 15:25:05 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:48.121 [ 00:14:48.121 { 00:14:48.121 "name": "Nvme0n1", 00:14:48.121 "aliases": [ 00:14:48.121 "a68bfb90-b683-40f0-9eb9-59a07f911ddc" 00:14:48.121 ], 00:14:48.121 "product_name": "NVMe disk", 00:14:48.121 "block_size": 4096, 00:14:48.121 "num_blocks": 38912, 00:14:48.121 "uuid": "a68bfb90-b683-40f0-9eb9-59a07f911ddc", 00:14:48.121 "assigned_rate_limits": { 00:14:48.121 
"rw_ios_per_sec": 0, 00:14:48.121 "rw_mbytes_per_sec": 0, 00:14:48.121 "r_mbytes_per_sec": 0, 00:14:48.121 "w_mbytes_per_sec": 0 00:14:48.121 }, 00:14:48.121 "claimed": false, 00:14:48.121 "zoned": false, 00:14:48.121 "supported_io_types": { 00:14:48.121 "read": true, 00:14:48.121 "write": true, 00:14:48.121 "unmap": true, 00:14:48.121 "write_zeroes": true, 00:14:48.121 "flush": true, 00:14:48.121 "reset": true, 00:14:48.121 "compare": true, 00:14:48.121 "compare_and_write": true, 00:14:48.121 "abort": true, 00:14:48.121 "nvme_admin": true, 00:14:48.121 "nvme_io": true 00:14:48.121 }, 00:14:48.121 "memory_domains": [ 00:14:48.121 { 00:14:48.121 "dma_device_id": "system", 00:14:48.121 "dma_device_type": 1 00:14:48.121 } 00:14:48.121 ], 00:14:48.121 "driver_specific": { 00:14:48.121 "nvme": [ 00:14:48.121 { 00:14:48.121 "trid": { 00:14:48.121 "trtype": "TCP", 00:14:48.121 "adrfam": "IPv4", 00:14:48.121 "traddr": "10.0.0.2", 00:14:48.121 "trsvcid": "4420", 00:14:48.121 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:48.121 }, 00:14:48.121 "ctrlr_data": { 00:14:48.121 "cntlid": 1, 00:14:48.121 "vendor_id": "0x8086", 00:14:48.121 "model_number": "SPDK bdev Controller", 00:14:48.121 "serial_number": "SPDK0", 00:14:48.121 "firmware_revision": "24.05", 00:14:48.121 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:48.121 "oacs": { 00:14:48.121 "security": 0, 00:14:48.121 "format": 0, 00:14:48.121 "firmware": 0, 00:14:48.121 "ns_manage": 0 00:14:48.121 }, 00:14:48.121 "multi_ctrlr": true, 00:14:48.121 "ana_reporting": false 00:14:48.121 }, 00:14:48.121 "vs": { 00:14:48.121 "nvme_version": "1.3" 00:14:48.121 }, 00:14:48.121 "ns_data": { 00:14:48.121 "id": 1, 00:14:48.121 "can_share": true 00:14:48.121 } 00:14:48.121 } 00:14:48.121 ], 00:14:48.121 "mp_policy": "active_passive" 00:14:48.121 } 00:14:48.121 } 00:14:48.121 ] 00:14:48.121 15:25:05 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1582150 00:14:48.121 15:25:05 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:48.121 15:25:05 
-- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:48.121 Running I/O for 10 seconds... 00:14:49.503 Latency(us) 00:14:49.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:49.503 Nvme0n1 : 1.00 17505.00 68.38 0.00 0.00 0.00 0.00 0.00 00:14:49.503 =================================================================================================================== 00:14:49.503 Total : 17505.00 68.38 0.00 0.00 0.00 0.00 0.00 00:14:49.503 00:14:50.075 15:25:07 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f9366eb3-5e7f-4584-bcd1-8db06fd4f392 00:14:50.335 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.335 Nvme0n1 : 2.00 17700.00 69.14 0.00 0.00 0.00 0.00 0.00 00:14:50.335 =================================================================================================================== 00:14:50.335 Total : 17700.00 69.14 0.00 0.00 0.00 0.00 0.00 00:14:50.335 00:14:50.335 true 00:14:50.335 15:25:07 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9366eb3-5e7f-4584-bcd1-8db06fd4f392 00:14:50.335 15:25:07 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:50.596 15:25:07 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:50.596 15:25:07 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:50.596 15:25:07 -- target/nvmf_lvs_grow.sh@65 -- # wait 1582150 00:14:51.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.169 Nvme0n1 : 3.00 17760.33 69.38 0.00 0.00 0.00 0.00 0.00 00:14:51.169 
=================================================================================================================== 00:14:51.169 Total : 17760.33 69.38 0.00 0.00 0.00 0.00 0.00 00:14:51.169 00:14:52.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.552 Nvme0n1 : 4.00 17792.50 69.50 0.00 0.00 0.00 0.00 0.00 00:14:52.552 =================================================================================================================== 00:14:52.552 Total : 17792.50 69.50 0.00 0.00 0.00 0.00 0.00 00:14:52.552 00:14:53.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.123 Nvme0n1 : 5.00 17822.20 69.62 0.00 0.00 0.00 0.00 0.00 00:14:53.123 =================================================================================================================== 00:14:53.123 Total : 17822.20 69.62 0.00 0.00 0.00 0.00 0.00 00:14:53.123 00:14:54.507 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.507 Nvme0n1 : 6.00 17843.83 69.70 0.00 0.00 0.00 0.00 0.00 00:14:54.507 =================================================================================================================== 00:14:54.507 Total : 17843.83 69.70 0.00 0.00 0.00 0.00 0.00 00:14:54.507 00:14:55.449 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.449 Nvme0n1 : 7.00 17858.86 69.76 0.00 0.00 0.00 0.00 0.00 00:14:55.449 =================================================================================================================== 00:14:55.449 Total : 17858.86 69.76 0.00 0.00 0.00 0.00 0.00 00:14:55.449 00:14:56.391 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.391 Nvme0n1 : 8.00 17878.38 69.84 0.00 0.00 0.00 0.00 0.00 00:14:56.391 =================================================================================================================== 00:14:56.391 Total : 17878.38 69.84 0.00 0.00 0.00 0.00 0.00 00:14:56.391 
00:14:57.332 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.332 Nvme0n1 : 9.00 17887.00 69.87 0.00 0.00 0.00 0.00 0.00 00:14:57.333 =================================================================================================================== 00:14:57.333 Total : 17887.00 69.87 0.00 0.00 0.00 0.00 0.00 00:14:57.333 00:14:58.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:58.274 Nvme0n1 : 10.00 17893.80 69.90 0.00 0.00 0.00 0.00 0.00 00:14:58.274 =================================================================================================================== 00:14:58.274 Total : 17893.80 69.90 0.00 0.00 0.00 0.00 0.00 00:14:58.274 00:14:58.274 00:14:58.274 Latency(us) 00:14:58.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:58.274 Nvme0n1 : 10.00 17899.33 69.92 0.00 0.00 7148.30 4232.53 16602.45 00:14:58.274 =================================================================================================================== 00:14:58.274 Total : 17899.33 69.92 0.00 0.00 7148.30 4232.53 16602.45 00:14:58.274 0 00:14:58.274 15:25:15 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1581816 00:14:58.274 15:25:15 -- common/autotest_common.sh@936 -- # '[' -z 1581816 ']' 00:14:58.274 15:25:15 -- common/autotest_common.sh@940 -- # kill -0 1581816 00:14:58.274 15:25:15 -- common/autotest_common.sh@941 -- # uname 00:14:58.274 15:25:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:58.274 15:25:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1581816 00:14:58.274 15:25:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:58.274 15:25:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:58.274 15:25:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1581816' 00:14:58.274 killing process 
with pid 1581816 00:14:58.274 15:25:15 -- common/autotest_common.sh@955 -- # kill 1581816 00:14:58.274 Received shutdown signal, test time was about 10.000000 seconds 00:14:58.274 00:14:58.274 Latency(us) 00:14:58.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.274 =================================================================================================================== 00:14:58.274 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:58.274 15:25:15 -- common/autotest_common.sh@960 -- # wait 1581816 00:14:58.541 15:25:15 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:58.542 15:25:15 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9366eb3-5e7f-4584-bcd1-8db06fd4f392 00:14:58.542 15:25:15 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:58.805 15:25:16 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:58.805 15:25:16 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:58.805 15:25:16 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1578313 00:14:58.805 15:25:16 -- target/nvmf_lvs_grow.sh@74 -- # wait 1578313 00:14:58.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1578313 Killed "${NVMF_APP[@]}" "$@" 00:14:58.806 15:25:16 -- target/nvmf_lvs_grow.sh@74 -- # true 00:14:58.806 15:25:16 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:58.806 15:25:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:58.806 15:25:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:58.806 15:25:16 -- common/autotest_common.sh@10 -- # set +x 00:14:58.806 15:25:16 -- nvmf/common.sh@470 -- # nvmfpid=1584174 00:14:58.806 15:25:16 -- nvmf/common.sh@471 -- # waitforlisten 1584174 00:14:58.806 15:25:16 -- common/autotest_common.sh@817 -- # '[' -z 1584174 
']' 00:14:58.806 15:25:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.806 15:25:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:58.806 15:25:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.806 15:25:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:58.806 15:25:16 -- common/autotest_common.sh@10 -- # set +x 00:14:58.806 15:25:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:58.806 [2024-04-26 15:25:16.173518] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:14:58.806 [2024-04-26 15:25:16.173574] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.806 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.806 [2024-04-26 15:25:16.239334] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.065 [2024-04-26 15:25:16.302769] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.065 [2024-04-26 15:25:16.302807] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.065 [2024-04-26 15:25:16.302814] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.065 [2024-04-26 15:25:16.302821] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.065 [2024-04-26 15:25:16.302826] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:59.065 [2024-04-26 15:25:16.302856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.636 15:25:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:59.636 15:25:16 -- common/autotest_common.sh@850 -- # return 0 00:14:59.636 15:25:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:59.636 15:25:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:59.636 15:25:16 -- common/autotest_common.sh@10 -- # set +x 00:14:59.636 15:25:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.636 15:25:16 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:59.897 [2024-04-26 15:25:17.099934] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:59.897 [2024-04-26 15:25:17.100026] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:59.897 [2024-04-26 15:25:17.100056] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:59.897 15:25:17 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:59.897 15:25:17 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev a68bfb90-b683-40f0-9eb9-59a07f911ddc 00:14:59.897 15:25:17 -- common/autotest_common.sh@885 -- # local bdev_name=a68bfb90-b683-40f0-9eb9-59a07f911ddc 00:14:59.897 15:25:17 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:59.897 15:25:17 -- common/autotest_common.sh@887 -- # local i 00:14:59.897 15:25:17 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:59.897 15:25:17 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:59.897 15:25:17 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:59.897 15:25:17 -- common/autotest_common.sh@892 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a68bfb90-b683-40f0-9eb9-59a07f911ddc -t 2000 00:15:00.158 [ 00:15:00.158 { 00:15:00.158 "name": "a68bfb90-b683-40f0-9eb9-59a07f911ddc", 00:15:00.158 "aliases": [ 00:15:00.158 "lvs/lvol" 00:15:00.158 ], 00:15:00.158 "product_name": "Logical Volume", 00:15:00.158 "block_size": 4096, 00:15:00.158 "num_blocks": 38912, 00:15:00.158 "uuid": "a68bfb90-b683-40f0-9eb9-59a07f911ddc", 00:15:00.158 "assigned_rate_limits": { 00:15:00.158 "rw_ios_per_sec": 0, 00:15:00.158 "rw_mbytes_per_sec": 0, 00:15:00.158 "r_mbytes_per_sec": 0, 00:15:00.158 "w_mbytes_per_sec": 0 00:15:00.158 }, 00:15:00.158 "claimed": false, 00:15:00.158 "zoned": false, 00:15:00.158 "supported_io_types": { 00:15:00.158 "read": true, 00:15:00.158 "write": true, 00:15:00.158 "unmap": true, 00:15:00.158 "write_zeroes": true, 00:15:00.158 "flush": false, 00:15:00.158 "reset": true, 00:15:00.158 "compare": false, 00:15:00.158 "compare_and_write": false, 00:15:00.158 "abort": false, 00:15:00.158 "nvme_admin": false, 00:15:00.158 "nvme_io": false 00:15:00.158 }, 00:15:00.158 "driver_specific": { 00:15:00.158 "lvol": { 00:15:00.158 "lvol_store_uuid": "f9366eb3-5e7f-4584-bcd1-8db06fd4f392", 00:15:00.158 "base_bdev": "aio_bdev", 00:15:00.158 "thin_provision": false, 00:15:00.158 "snapshot": false, 00:15:00.158 "clone": false, 00:15:00.158 "esnap_clone": false 00:15:00.158 } 00:15:00.158 } 00:15:00.158 } 00:15:00.158 ] 00:15:00.158 15:25:17 -- common/autotest_common.sh@893 -- # return 0 00:15:00.158 15:25:17 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9366eb3-5e7f-4584-bcd1-8db06fd4f392 00:15:00.158 15:25:17 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:15:00.158 15:25:17 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:15:00.158 15:25:17 -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9366eb3-5e7f-4584-bcd1-8db06fd4f392 00:15:00.158 15:25:17 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:15:00.419 15:25:17 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:15:00.420 15:25:17 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:00.420 [2024-04-26 15:25:17.859796] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:00.681 15:25:17 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9366eb3-5e7f-4584-bcd1-8db06fd4f392 00:15:00.681 15:25:17 -- common/autotest_common.sh@638 -- # local es=0 00:15:00.681 15:25:17 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9366eb3-5e7f-4584-bcd1-8db06fd4f392 00:15:00.681 15:25:17 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:00.681 15:25:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:00.681 15:25:17 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:00.681 15:25:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:00.681 15:25:17 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:00.681 15:25:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:00.681 15:25:17 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:00.681 15:25:17 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:00.681 
15:25:17 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9366eb3-5e7f-4584-bcd1-8db06fd4f392 00:15:00.681 request: 00:15:00.681 { 00:15:00.681 "uuid": "f9366eb3-5e7f-4584-bcd1-8db06fd4f392", 00:15:00.681 "method": "bdev_lvol_get_lvstores", 00:15:00.681 "req_id": 1 00:15:00.681 } 00:15:00.681 Got JSON-RPC error response 00:15:00.681 response: 00:15:00.681 { 00:15:00.681 "code": -19, 00:15:00.681 "message": "No such device" 00:15:00.681 } 00:15:00.681 15:25:18 -- common/autotest_common.sh@641 -- # es=1 00:15:00.681 15:25:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:00.681 15:25:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:00.681 15:25:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:00.681 15:25:18 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:00.942 aio_bdev 00:15:00.942 15:25:18 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev a68bfb90-b683-40f0-9eb9-59a07f911ddc 00:15:00.942 15:25:18 -- common/autotest_common.sh@885 -- # local bdev_name=a68bfb90-b683-40f0-9eb9-59a07f911ddc 00:15:00.942 15:25:18 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:00.942 15:25:18 -- common/autotest_common.sh@887 -- # local i 00:15:00.942 15:25:18 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:00.942 15:25:18 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:00.942 15:25:18 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:01.204 15:25:18 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a68bfb90-b683-40f0-9eb9-59a07f911ddc -t 2000 00:15:01.204 [ 00:15:01.204 { 00:15:01.204 "name": 
"a68bfb90-b683-40f0-9eb9-59a07f911ddc", 00:15:01.204 "aliases": [ 00:15:01.204 "lvs/lvol" 00:15:01.204 ], 00:15:01.204 "product_name": "Logical Volume", 00:15:01.204 "block_size": 4096, 00:15:01.204 "num_blocks": 38912, 00:15:01.204 "uuid": "a68bfb90-b683-40f0-9eb9-59a07f911ddc", 00:15:01.204 "assigned_rate_limits": { 00:15:01.204 "rw_ios_per_sec": 0, 00:15:01.204 "rw_mbytes_per_sec": 0, 00:15:01.204 "r_mbytes_per_sec": 0, 00:15:01.204 "w_mbytes_per_sec": 0 00:15:01.204 }, 00:15:01.204 "claimed": false, 00:15:01.204 "zoned": false, 00:15:01.204 "supported_io_types": { 00:15:01.204 "read": true, 00:15:01.204 "write": true, 00:15:01.204 "unmap": true, 00:15:01.204 "write_zeroes": true, 00:15:01.204 "flush": false, 00:15:01.204 "reset": true, 00:15:01.204 "compare": false, 00:15:01.204 "compare_and_write": false, 00:15:01.204 "abort": false, 00:15:01.204 "nvme_admin": false, 00:15:01.204 "nvme_io": false 00:15:01.204 }, 00:15:01.204 "driver_specific": { 00:15:01.204 "lvol": { 00:15:01.204 "lvol_store_uuid": "f9366eb3-5e7f-4584-bcd1-8db06fd4f392", 00:15:01.204 "base_bdev": "aio_bdev", 00:15:01.204 "thin_provision": false, 00:15:01.204 "snapshot": false, 00:15:01.204 "clone": false, 00:15:01.204 "esnap_clone": false 00:15:01.204 } 00:15:01.204 } 00:15:01.204 } 00:15:01.204 ] 00:15:01.204 15:25:18 -- common/autotest_common.sh@893 -- # return 0 00:15:01.204 15:25:18 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9366eb3-5e7f-4584-bcd1-8db06fd4f392 00:15:01.204 15:25:18 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:01.466 15:25:18 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:01.466 15:25:18 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9366eb3-5e7f-4584-bcd1-8db06fd4f392 00:15:01.466 15:25:18 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:01.466 
15:25:18 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:01.466 15:25:18 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a68bfb90-b683-40f0-9eb9-59a07f911ddc 00:15:01.728 15:25:19 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f9366eb3-5e7f-4584-bcd1-8db06fd4f392 00:15:01.989 15:25:19 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:01.989 15:25:19 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:01.989 00:15:01.989 real 0m16.845s 00:15:01.989 user 0m44.186s 00:15:01.989 sys 0m2.759s 00:15:01.989 15:25:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:01.989 15:25:19 -- common/autotest_common.sh@10 -- # set +x 00:15:01.989 ************************************ 00:15:01.989 END TEST lvs_grow_dirty 00:15:01.989 ************************************ 00:15:02.249 15:25:19 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:02.250 15:25:19 -- common/autotest_common.sh@794 -- # type=--id 00:15:02.250 15:25:19 -- common/autotest_common.sh@795 -- # id=0 00:15:02.250 15:25:19 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:15:02.250 15:25:19 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:02.250 15:25:19 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:15:02.250 15:25:19 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:15:02.250 15:25:19 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:15:02.250 15:25:19 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:02.250 nvmf_trace.0 00:15:02.250 15:25:19 -- common/autotest_common.sh@809 -- # 
return 0 00:15:02.250 15:25:19 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:02.250 15:25:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:02.250 15:25:19 -- nvmf/common.sh@117 -- # sync 00:15:02.250 15:25:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:02.250 15:25:19 -- nvmf/common.sh@120 -- # set +e 00:15:02.250 15:25:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:02.250 15:25:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:02.250 rmmod nvme_tcp 00:15:02.250 rmmod nvme_fabrics 00:15:02.250 rmmod nvme_keyring 00:15:02.250 15:25:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:02.250 15:25:19 -- nvmf/common.sh@124 -- # set -e 00:15:02.250 15:25:19 -- nvmf/common.sh@125 -- # return 0 00:15:02.250 15:25:19 -- nvmf/common.sh@478 -- # '[' -n 1584174 ']' 00:15:02.250 15:25:19 -- nvmf/common.sh@479 -- # killprocess 1584174 00:15:02.250 15:25:19 -- common/autotest_common.sh@936 -- # '[' -z 1584174 ']' 00:15:02.250 15:25:19 -- common/autotest_common.sh@940 -- # kill -0 1584174 00:15:02.250 15:25:19 -- common/autotest_common.sh@941 -- # uname 00:15:02.250 15:25:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:02.250 15:25:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1584174 00:15:02.250 15:25:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:02.250 15:25:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:02.250 15:25:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1584174' 00:15:02.250 killing process with pid 1584174 00:15:02.250 15:25:19 -- common/autotest_common.sh@955 -- # kill 1584174 00:15:02.250 15:25:19 -- common/autotest_common.sh@960 -- # wait 1584174 00:15:02.511 15:25:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:02.511 15:25:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:02.511 15:25:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:02.511 15:25:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:02.511 15:25:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:02.511 15:25:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.511 15:25:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.511 15:25:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.427 15:25:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:04.427 00:15:04.427 real 0m42.951s 00:15:04.427 user 1m4.925s 00:15:04.427 sys 0m9.907s 00:15:04.427 15:25:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:04.427 15:25:21 -- common/autotest_common.sh@10 -- # set +x 00:15:04.427 ************************************ 00:15:04.427 END TEST nvmf_lvs_grow 00:15:04.427 ************************************ 00:15:04.427 15:25:21 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:04.427 15:25:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:04.427 15:25:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:04.427 15:25:21 -- common/autotest_common.sh@10 -- # set +x 00:15:04.688 ************************************ 00:15:04.688 START TEST nvmf_bdev_io_wait 00:15:04.688 ************************************ 00:15:04.688 15:25:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:04.688 * Looking for test storage... 
00:15:04.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:04.688 15:25:22 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:04.688 15:25:22 -- nvmf/common.sh@7 -- # uname -s 00:15:04.688 15:25:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.688 15:25:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.688 15:25:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.688 15:25:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.688 15:25:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.688 15:25:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.688 15:25:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.688 15:25:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.688 15:25:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.950 15:25:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.950 15:25:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:04.950 15:25:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:04.950 15:25:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.950 15:25:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.950 15:25:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:04.950 15:25:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:04.950 15:25:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:04.950 15:25:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.950 15:25:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.950 15:25:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.950 15:25:22 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.950 15:25:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.950 15:25:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.950 15:25:22 -- paths/export.sh@5 -- # export PATH 00:15:04.950 15:25:22 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.950 15:25:22 -- nvmf/common.sh@47 -- # : 0 00:15:04.950 15:25:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:04.950 15:25:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:04.950 15:25:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:04.950 15:25:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.950 15:25:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.950 15:25:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:04.950 15:25:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:04.950 15:25:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:04.950 15:25:22 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:04.950 15:25:22 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:04.950 15:25:22 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:04.950 15:25:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:04.950 15:25:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.950 15:25:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:04.950 15:25:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:04.950 15:25:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:04.950 15:25:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.950 15:25:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.950 15:25:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.950 
15:25:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:04.950 15:25:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:04.950 15:25:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:04.950 15:25:22 -- common/autotest_common.sh@10 -- # set +x 00:15:11.546 15:25:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:11.546 15:25:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:11.546 15:25:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:11.546 15:25:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:11.546 15:25:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:11.546 15:25:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:11.546 15:25:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:11.546 15:25:28 -- nvmf/common.sh@295 -- # net_devs=() 00:15:11.546 15:25:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:11.546 15:25:28 -- nvmf/common.sh@296 -- # e810=() 00:15:11.546 15:25:28 -- nvmf/common.sh@296 -- # local -ga e810 00:15:11.546 15:25:28 -- nvmf/common.sh@297 -- # x722=() 00:15:11.546 15:25:28 -- nvmf/common.sh@297 -- # local -ga x722 00:15:11.546 15:25:28 -- nvmf/common.sh@298 -- # mlx=() 00:15:11.546 15:25:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:11.546 15:25:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:11.546 15:25:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:11.546 15:25:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:11.546 15:25:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:11.546 15:25:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:11.546 15:25:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:11.546 15:25:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:11.546 15:25:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:11.546 15:25:28 
-- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:11.546 15:25:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:11.546 15:25:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:11.546 15:25:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:11.546 15:25:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:11.546 15:25:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:11.546 15:25:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:11.546 15:25:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:11.546 15:25:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:11.546 15:25:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:11.546 15:25:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:11.546 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:11.546 15:25:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:11.546 15:25:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:11.546 15:25:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.546 15:25:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.546 15:25:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:11.546 15:25:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:11.546 15:25:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:11.546 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:11.546 15:25:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:11.546 15:25:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:11.546 15:25:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.546 15:25:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.546 15:25:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:11.546 15:25:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:11.546 15:25:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:11.546 15:25:28 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:11.546 15:25:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:11.546 15:25:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.546 15:25:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:11.546 15:25:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.546 15:25:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:11.546 Found net devices under 0000:31:00.0: cvl_0_0 00:15:11.546 15:25:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.546 15:25:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:11.546 15:25:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.546 15:25:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:11.546 15:25:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.546 15:25:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:11.546 Found net devices under 0000:31:00.1: cvl_0_1 00:15:11.546 15:25:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.546 15:25:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:11.546 15:25:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:11.546 15:25:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:11.546 15:25:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:11.546 15:25:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:11.546 15:25:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.546 15:25:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.546 15:25:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:11.546 15:25:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:11.546 15:25:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:11.546 15:25:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:11.546 15:25:28 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:11.546 15:25:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:11.546 15:25:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.546 15:25:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:11.546 15:25:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:11.546 15:25:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:11.546 15:25:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:11.807 15:25:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:11.807 15:25:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:11.807 15:25:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:11.807 15:25:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:11.807 15:25:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:11.807 15:25:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:12.067 15:25:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:12.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:12.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:15:12.067 00:15:12.067 --- 10.0.0.2 ping statistics --- 00:15:12.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.067 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:15:12.067 15:25:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:12.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:12.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:15:12.067 00:15:12.067 --- 10.0.0.1 ping statistics --- 00:15:12.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.067 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:15:12.067 15:25:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.067 15:25:29 -- nvmf/common.sh@411 -- # return 0 00:15:12.067 15:25:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:12.067 15:25:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.067 15:25:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:12.067 15:25:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:12.067 15:25:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.067 15:25:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:12.067 15:25:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:12.067 15:25:29 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:12.067 15:25:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:12.067 15:25:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:12.067 15:25:29 -- common/autotest_common.sh@10 -- # set +x 00:15:12.067 15:25:29 -- nvmf/common.sh@470 -- # nvmfpid=1589209 00:15:12.067 15:25:29 -- nvmf/common.sh@471 -- # waitforlisten 1589209 00:15:12.067 15:25:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:12.067 15:25:29 -- common/autotest_common.sh@817 -- # '[' -z 1589209 ']' 00:15:12.067 15:25:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.067 15:25:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:12.067 15:25:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:12.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.067 15:25:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:12.067 15:25:29 -- common/autotest_common.sh@10 -- # set +x 00:15:12.067 [2024-04-26 15:25:29.389735] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:15:12.067 [2024-04-26 15:25:29.389804] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.067 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.067 [2024-04-26 15:25:29.462213] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:12.327 [2024-04-26 15:25:29.536498] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.327 [2024-04-26 15:25:29.536540] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.327 [2024-04-26 15:25:29.536549] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.327 [2024-04-26 15:25:29.536557] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.327 [2024-04-26 15:25:29.536563] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:12.327 [2024-04-26 15:25:29.536724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.327 [2024-04-26 15:25:29.536862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.327 [2024-04-26 15:25:29.536999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.327 [2024-04-26 15:25:29.536999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.897 15:25:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:12.897 15:25:30 -- common/autotest_common.sh@850 -- # return 0 00:15:12.897 15:25:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:12.897 15:25:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:12.897 15:25:30 -- common/autotest_common.sh@10 -- # set +x 00:15:12.897 15:25:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.897 15:25:30 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:12.897 15:25:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:12.897 15:25:30 -- common/autotest_common.sh@10 -- # set +x 00:15:12.897 15:25:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:12.897 15:25:30 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:12.897 15:25:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:12.897 15:25:30 -- common/autotest_common.sh@10 -- # set +x 00:15:12.897 15:25:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:12.897 15:25:30 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:12.897 15:25:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:12.897 15:25:30 -- common/autotest_common.sh@10 -- # set +x 00:15:12.897 [2024-04-26 15:25:30.265465] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.897 15:25:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:12.897 15:25:30 -- target/bdev_io_wait.sh@22 -- # 
rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:12.897 15:25:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:12.897 15:25:30 -- common/autotest_common.sh@10 -- # set +x 00:15:12.897 Malloc0 00:15:12.897 15:25:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:12.897 15:25:30 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:12.897 15:25:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:12.897 15:25:30 -- common/autotest_common.sh@10 -- # set +x 00:15:12.897 15:25:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:12.897 15:25:30 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:12.897 15:25:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:12.897 15:25:30 -- common/autotest_common.sh@10 -- # set +x 00:15:12.897 15:25:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:12.897 15:25:30 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:12.897 15:25:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:12.897 15:25:30 -- common/autotest_common.sh@10 -- # set +x 00:15:12.897 [2024-04-26 15:25:30.335094] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.897 15:25:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:12.897 15:25:30 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1589332 00:15:12.897 15:25:30 -- target/bdev_io_wait.sh@30 -- # READ_PID=1589334 00:15:12.897 15:25:30 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:12.897 15:25:30 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:12.897 15:25:30 -- nvmf/common.sh@521 -- # config=() 00:15:12.897 15:25:30 -- nvmf/common.sh@521 -- # local 
subsystem config 00:15:12.897 15:25:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:12.897 15:25:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:12.897 { 00:15:12.897 "params": { 00:15:12.897 "name": "Nvme$subsystem", 00:15:12.897 "trtype": "$TEST_TRANSPORT", 00:15:12.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:12.897 "adrfam": "ipv4", 00:15:12.897 "trsvcid": "$NVMF_PORT", 00:15:12.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:12.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:12.897 "hdgst": ${hdgst:-false}, 00:15:12.897 "ddgst": ${ddgst:-false} 00:15:12.897 }, 00:15:12.897 "method": "bdev_nvme_attach_controller" 00:15:12.897 } 00:15:12.897 EOF 00:15:12.897 )") 00:15:12.897 15:25:30 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1589336 00:15:12.897 15:25:30 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:12.897 15:25:30 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:12.897 15:25:30 -- nvmf/common.sh@521 -- # config=() 00:15:12.897 15:25:30 -- nvmf/common.sh@521 -- # local subsystem config 00:15:12.897 15:25:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:12.897 15:25:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:12.897 { 00:15:12.897 "params": { 00:15:12.897 "name": "Nvme$subsystem", 00:15:12.897 "trtype": "$TEST_TRANSPORT", 00:15:12.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:12.897 "adrfam": "ipv4", 00:15:12.897 "trsvcid": "$NVMF_PORT", 00:15:12.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:12.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:12.897 "hdgst": ${hdgst:-false}, 00:15:12.897 "ddgst": ${ddgst:-false} 00:15:12.897 }, 00:15:12.897 "method": "bdev_nvme_attach_controller" 00:15:12.897 } 00:15:12.897 EOF 00:15:12.897 )") 00:15:12.897 15:25:30 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1589339 00:15:12.897 
15:25:30 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:12.897 15:25:30 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:13.158 15:25:30 -- target/bdev_io_wait.sh@35 -- # sync 00:15:13.158 15:25:30 -- nvmf/common.sh@521 -- # config=() 00:15:13.158 15:25:30 -- nvmf/common.sh@543 -- # cat 00:15:13.158 15:25:30 -- nvmf/common.sh@521 -- # local subsystem config 00:15:13.158 15:25:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:13.158 15:25:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:13.158 { 00:15:13.158 "params": { 00:15:13.158 "name": "Nvme$subsystem", 00:15:13.158 "trtype": "$TEST_TRANSPORT", 00:15:13.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:13.158 "adrfam": "ipv4", 00:15:13.158 "trsvcid": "$NVMF_PORT", 00:15:13.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:13.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:13.158 "hdgst": ${hdgst:-false}, 00:15:13.158 "ddgst": ${ddgst:-false} 00:15:13.158 }, 00:15:13.158 "method": "bdev_nvme_attach_controller" 00:15:13.158 } 00:15:13.158 EOF 00:15:13.158 )") 00:15:13.158 15:25:30 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:13.158 15:25:30 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:13.158 15:25:30 -- nvmf/common.sh@521 -- # config=() 00:15:13.158 15:25:30 -- nvmf/common.sh@543 -- # cat 00:15:13.158 15:25:30 -- nvmf/common.sh@521 -- # local subsystem config 00:15:13.158 15:25:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:13.158 15:25:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:13.158 { 00:15:13.158 "params": { 00:15:13.158 "name": "Nvme$subsystem", 00:15:13.158 "trtype": "$TEST_TRANSPORT", 00:15:13.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:13.158 
"adrfam": "ipv4", 00:15:13.158 "trsvcid": "$NVMF_PORT", 00:15:13.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:13.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:13.158 "hdgst": ${hdgst:-false}, 00:15:13.158 "ddgst": ${ddgst:-false} 00:15:13.158 }, 00:15:13.158 "method": "bdev_nvme_attach_controller" 00:15:13.158 } 00:15:13.158 EOF 00:15:13.158 )") 00:15:13.158 15:25:30 -- nvmf/common.sh@543 -- # cat 00:15:13.158 15:25:30 -- target/bdev_io_wait.sh@37 -- # wait 1589332 00:15:13.158 15:25:30 -- nvmf/common.sh@543 -- # cat 00:15:13.158 15:25:30 -- nvmf/common.sh@545 -- # jq . 00:15:13.158 15:25:30 -- nvmf/common.sh@545 -- # jq . 00:15:13.158 15:25:30 -- nvmf/common.sh@545 -- # jq . 00:15:13.158 15:25:30 -- nvmf/common.sh@546 -- # IFS=, 00:15:13.158 15:25:30 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:13.158 "params": { 00:15:13.158 "name": "Nvme1", 00:15:13.158 "trtype": "tcp", 00:15:13.158 "traddr": "10.0.0.2", 00:15:13.158 "adrfam": "ipv4", 00:15:13.158 "trsvcid": "4420", 00:15:13.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:13.158 "hdgst": false, 00:15:13.158 "ddgst": false 00:15:13.158 }, 00:15:13.158 "method": "bdev_nvme_attach_controller" 00:15:13.158 }' 00:15:13.158 15:25:30 -- nvmf/common.sh@545 -- # jq . 
00:15:13.158 15:25:30 -- nvmf/common.sh@546 -- # IFS=, 00:15:13.158 15:25:30 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:13.158 "params": { 00:15:13.158 "name": "Nvme1", 00:15:13.158 "trtype": "tcp", 00:15:13.158 "traddr": "10.0.0.2", 00:15:13.158 "adrfam": "ipv4", 00:15:13.158 "trsvcid": "4420", 00:15:13.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:13.158 "hdgst": false, 00:15:13.158 "ddgst": false 00:15:13.158 }, 00:15:13.158 "method": "bdev_nvme_attach_controller" 00:15:13.158 }' 00:15:13.158 15:25:30 -- nvmf/common.sh@546 -- # IFS=, 00:15:13.158 15:25:30 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:13.158 "params": { 00:15:13.158 "name": "Nvme1", 00:15:13.158 "trtype": "tcp", 00:15:13.158 "traddr": "10.0.0.2", 00:15:13.158 "adrfam": "ipv4", 00:15:13.158 "trsvcid": "4420", 00:15:13.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:13.158 "hdgst": false, 00:15:13.158 "ddgst": false 00:15:13.158 }, 00:15:13.158 "method": "bdev_nvme_attach_controller" 00:15:13.158 }' 00:15:13.158 15:25:30 -- nvmf/common.sh@546 -- # IFS=, 00:15:13.158 15:25:30 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:13.158 "params": { 00:15:13.158 "name": "Nvme1", 00:15:13.158 "trtype": "tcp", 00:15:13.158 "traddr": "10.0.0.2", 00:15:13.158 "adrfam": "ipv4", 00:15:13.158 "trsvcid": "4420", 00:15:13.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:13.158 "hdgst": false, 00:15:13.158 "ddgst": false 00:15:13.158 }, 00:15:13.158 "method": "bdev_nvme_attach_controller" 00:15:13.158 }' 00:15:13.158 [2024-04-26 15:25:30.386232] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:15:13.158 [2024-04-26 15:25:30.386284] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:13.158 [2024-04-26 15:25:30.388742] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:15:13.158 [2024-04-26 15:25:30.388790] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:13.158 [2024-04-26 15:25:30.389151] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:15:13.158 [2024-04-26 15:25:30.389194] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:13.158 [2024-04-26 15:25:30.390488] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:15:13.158 [2024-04-26 15:25:30.390530] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:13.158 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.158 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.158 [2024-04-26 15:25:30.529844] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.158 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.158 [2024-04-26 15:25:30.578982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:13.158 [2024-04-26 15:25:30.589636] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.158 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.419 [2024-04-26 15:25:30.637730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:13.419 [2024-04-26 15:25:30.647494] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.419 [2024-04-26 15:25:30.697192] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.419 [2024-04-26 15:25:30.697942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:13.419 [2024-04-26 15:25:30.744566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:13.419 Running I/O for 1 seconds... 00:15:13.419 Running I/O for 1 seconds... 00:15:13.679 Running I/O for 1 seconds... 00:15:13.679 Running I/O for 1 seconds... 
00:15:14.620 00:15:14.620 Latency(us) 00:15:14.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.620 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:14.620 Nvme1n1 : 1.00 14015.32 54.75 0.00 0.00 9105.84 4942.51 22391.47 00:15:14.620 =================================================================================================================== 00:15:14.620 Total : 14015.32 54.75 0.00 0.00 9105.84 4942.51 22391.47 00:15:14.620 00:15:14.620 Latency(us) 00:15:14.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.620 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:14.620 Nvme1n1 : 1.02 6638.65 25.93 0.00 0.00 19157.61 5406.72 27962.03 00:15:14.620 =================================================================================================================== 00:15:14.620 Total : 6638.65 25.93 0.00 0.00 19157.61 5406.72 27962.03 00:15:14.620 00:15:14.620 Latency(us) 00:15:14.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.620 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:14.620 Nvme1n1 : 1.01 6573.48 25.68 0.00 0.00 19408.98 6062.08 43253.76 00:15:14.620 =================================================================================================================== 00:15:14.620 Total : 6573.48 25.68 0.00 0.00 19408.98 6062.08 43253.76 00:15:14.620 00:15:14.620 Latency(us) 00:15:14.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.620 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:14.620 Nvme1n1 : 1.00 189426.44 739.95 0.00 0.00 673.03 259.41 774.83 00:15:14.620 =================================================================================================================== 00:15:14.620 Total : 189426.44 739.95 0.00 0.00 673.03 259.41 774.83 00:15:14.620 15:25:32 -- target/bdev_io_wait.sh@38 -- # 
wait 1589334 00:15:14.900 15:25:32 -- target/bdev_io_wait.sh@39 -- # wait 1589336 00:15:14.900 15:25:32 -- target/bdev_io_wait.sh@40 -- # wait 1589339 00:15:14.900 15:25:32 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:14.900 15:25:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.900 15:25:32 -- common/autotest_common.sh@10 -- # set +x 00:15:14.900 15:25:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.900 15:25:32 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:14.900 15:25:32 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:14.900 15:25:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:14.900 15:25:32 -- nvmf/common.sh@117 -- # sync 00:15:14.900 15:25:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:14.900 15:25:32 -- nvmf/common.sh@120 -- # set +e 00:15:14.900 15:25:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:14.900 15:25:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:14.900 rmmod nvme_tcp 00:15:14.900 rmmod nvme_fabrics 00:15:14.900 rmmod nvme_keyring 00:15:14.900 15:25:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:14.900 15:25:32 -- nvmf/common.sh@124 -- # set -e 00:15:14.900 15:25:32 -- nvmf/common.sh@125 -- # return 0 00:15:14.900 15:25:32 -- nvmf/common.sh@478 -- # '[' -n 1589209 ']' 00:15:14.900 15:25:32 -- nvmf/common.sh@479 -- # killprocess 1589209 00:15:14.900 15:25:32 -- common/autotest_common.sh@936 -- # '[' -z 1589209 ']' 00:15:14.900 15:25:32 -- common/autotest_common.sh@940 -- # kill -0 1589209 00:15:14.900 15:25:32 -- common/autotest_common.sh@941 -- # uname 00:15:14.900 15:25:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:14.900 15:25:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1589209 00:15:14.900 15:25:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:14.900 15:25:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 
00:15:14.900 15:25:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1589209' 00:15:14.900 killing process with pid 1589209 00:15:14.900 15:25:32 -- common/autotest_common.sh@955 -- # kill 1589209 00:15:14.900 15:25:32 -- common/autotest_common.sh@960 -- # wait 1589209 00:15:15.202 15:25:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:15.202 15:25:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:15.202 15:25:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:15.202 15:25:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:15.202 15:25:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:15.202 15:25:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.202 15:25:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.202 15:25:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.137 15:25:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:17.137 00:15:17.137 real 0m12.410s 00:15:17.137 user 0m18.621s 00:15:17.137 sys 0m6.673s 00:15:17.137 15:25:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:17.137 15:25:34 -- common/autotest_common.sh@10 -- # set +x 00:15:17.137 ************************************ 00:15:17.137 END TEST nvmf_bdev_io_wait 00:15:17.137 ************************************ 00:15:17.137 15:25:34 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:17.137 15:25:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:17.137 15:25:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:17.137 15:25:34 -- common/autotest_common.sh@10 -- # set +x 00:15:17.398 ************************************ 00:15:17.398 START TEST nvmf_queue_depth 00:15:17.398 ************************************ 00:15:17.398 15:25:34 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:17.398 * Looking for test storage... 00:15:17.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:17.398 15:25:34 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:17.398 15:25:34 -- nvmf/common.sh@7 -- # uname -s 00:15:17.398 15:25:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.398 15:25:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.398 15:25:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.398 15:25:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.398 15:25:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.398 15:25:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.398 15:25:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.398 15:25:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.398 15:25:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.398 15:25:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.398 15:25:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:17.398 15:25:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:17.398 15:25:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.398 15:25:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.398 15:25:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:17.398 15:25:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.398 15:25:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:17.398 15:25:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.398 15:25:34 -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.398 15:25:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.398 15:25:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.398 15:25:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.398 15:25:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.398 15:25:34 -- paths/export.sh@5 -- # export PATH 00:15:17.398 15:25:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.398 15:25:34 -- nvmf/common.sh@47 -- # : 0 00:15:17.398 15:25:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:17.398 15:25:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:17.398 15:25:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.398 15:25:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.398 15:25:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.398 15:25:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:17.398 15:25:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:17.398 15:25:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:17.398 15:25:34 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:17.398 15:25:34 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:17.398 15:25:34 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:17.398 15:25:34 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:17.398 15:25:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:17.398 15:25:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.398 15:25:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:17.398 15:25:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:17.398 15:25:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:17.398 15:25:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.398 15:25:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:15:17.398 15:25:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.399 15:25:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:17.399 15:25:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:17.399 15:25:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:17.399 15:25:34 -- common/autotest_common.sh@10 -- # set +x 00:15:25.546 15:25:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:25.546 15:25:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:25.546 15:25:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:25.546 15:25:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:25.546 15:25:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:25.546 15:25:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:25.546 15:25:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:25.546 15:25:41 -- nvmf/common.sh@295 -- # net_devs=() 00:15:25.546 15:25:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:25.546 15:25:41 -- nvmf/common.sh@296 -- # e810=() 00:15:25.546 15:25:41 -- nvmf/common.sh@296 -- # local -ga e810 00:15:25.546 15:25:41 -- nvmf/common.sh@297 -- # x722=() 00:15:25.546 15:25:41 -- nvmf/common.sh@297 -- # local -ga x722 00:15:25.546 15:25:41 -- nvmf/common.sh@298 -- # mlx=() 00:15:25.546 15:25:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:25.546 15:25:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:25.546 15:25:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:25.546 15:25:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:25.546 15:25:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:25.546 15:25:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:25.546 15:25:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:25.546 15:25:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:25.546 15:25:41 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:25.546 15:25:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:25.546 15:25:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:25.546 15:25:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:25.546 15:25:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:25.546 15:25:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:25.546 15:25:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:25.546 15:25:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:25.546 15:25:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:25.546 15:25:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:25.546 15:25:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.546 15:25:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:25.546 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:25.546 15:25:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.546 15:25:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.546 15:25:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.546 15:25:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.546 15:25:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.546 15:25:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.546 15:25:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:25.546 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:25.546 15:25:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.546 15:25:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.546 15:25:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.546 15:25:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.546 15:25:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.546 15:25:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:25.546 
15:25:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:25.546 15:25:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:25.547 15:25:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.547 15:25:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.547 15:25:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:25.547 15:25:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.547 15:25:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:25.547 Found net devices under 0000:31:00.0: cvl_0_0 00:15:25.547 15:25:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.547 15:25:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.547 15:25:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.547 15:25:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:25.547 15:25:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.547 15:25:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:25.547 Found net devices under 0000:31:00.1: cvl_0_1 00:15:25.547 15:25:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.547 15:25:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:25.547 15:25:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:25.547 15:25:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:25.547 15:25:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:25.547 15:25:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:25.547 15:25:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.547 15:25:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.547 15:25:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:25.547 15:25:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:25.547 15:25:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:25.547 15:25:41 -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:25.547 15:25:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:25.547 15:25:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:25.547 15:25:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.547 15:25:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:25.547 15:25:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:25.547 15:25:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:25.547 15:25:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:25.547 15:25:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:25.547 15:25:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:25.547 15:25:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:25.547 15:25:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:25.547 15:25:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:25.547 15:25:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:25.547 15:25:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:25.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:15:25.547 00:15:25.547 --- 10.0.0.2 ping statistics --- 00:15:25.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.547 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:15:25.547 15:25:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:25.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:25.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:15:25.547 00:15:25.547 --- 10.0.0.1 ping statistics --- 00:15:25.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.547 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:15:25.547 15:25:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.547 15:25:42 -- nvmf/common.sh@411 -- # return 0 00:15:25.547 15:25:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:25.547 15:25:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.547 15:25:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:25.547 15:25:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:25.547 15:25:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.547 15:25:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:25.547 15:25:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:25.547 15:25:42 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:25.547 15:25:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:25.547 15:25:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:25.547 15:25:42 -- common/autotest_common.sh@10 -- # set +x 00:15:25.547 15:25:42 -- nvmf/common.sh@470 -- # nvmfpid=1594085 00:15:25.547 15:25:42 -- nvmf/common.sh@471 -- # waitforlisten 1594085 00:15:25.547 15:25:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:25.547 15:25:42 -- common/autotest_common.sh@817 -- # '[' -z 1594085 ']' 00:15:25.547 15:25:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.547 15:25:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:25.547 15:25:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:25.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.547 15:25:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:25.547 15:25:42 -- common/autotest_common.sh@10 -- # set +x 00:15:25.547 [2024-04-26 15:25:42.145258] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:15:25.547 [2024-04-26 15:25:42.145307] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.547 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.547 [2024-04-26 15:25:42.227199] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.547 [2024-04-26 15:25:42.289681] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.547 [2024-04-26 15:25:42.289714] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.547 [2024-04-26 15:25:42.289722] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.547 [2024-04-26 15:25:42.289728] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.547 [2024-04-26 15:25:42.289734] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:25.547 [2024-04-26 15:25:42.289751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.547 15:25:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:25.547 15:25:42 -- common/autotest_common.sh@850 -- # return 0 00:15:25.547 15:25:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:25.547 15:25:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:25.547 15:25:42 -- common/autotest_common.sh@10 -- # set +x 00:15:25.547 15:25:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.547 15:25:42 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:25.547 15:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:25.547 15:25:42 -- common/autotest_common.sh@10 -- # set +x 00:15:25.547 [2024-04-26 15:25:42.967828] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.547 15:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:25.547 15:25:42 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:25.547 15:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:25.547 15:25:42 -- common/autotest_common.sh@10 -- # set +x 00:15:25.809 Malloc0 00:15:25.809 15:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:25.809 15:25:43 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:25.809 15:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:25.809 15:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:25.809 15:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:25.809 15:25:43 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:25.809 15:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:25.809 15:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:25.809 15:25:43 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:25.809 15:25:43 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:25.809 15:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:25.809 15:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:25.809 [2024-04-26 15:25:43.034556] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.809 15:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:25.809 15:25:43 -- target/queue_depth.sh@30 -- # bdevperf_pid=1594226 00:15:25.809 15:25:43 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:25.809 15:25:43 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:25.809 15:25:43 -- target/queue_depth.sh@33 -- # waitforlisten 1594226 /var/tmp/bdevperf.sock 00:15:25.809 15:25:43 -- common/autotest_common.sh@817 -- # '[' -z 1594226 ']' 00:15:25.809 15:25:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:25.809 15:25:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:25.809 15:25:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:25.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:25.809 15:25:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:25.809 15:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:25.809 [2024-04-26 15:25:43.089488] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:15:25.809 [2024-04-26 15:25:43.089551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594226 ] 00:15:25.809 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.809 [2024-04-26 15:25:43.154261] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.809 [2024-04-26 15:25:43.226652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.752 15:25:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:26.752 15:25:43 -- common/autotest_common.sh@850 -- # return 0 00:15:26.752 15:25:43 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:26.752 15:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.752 15:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:26.752 NVMe0n1 00:15:26.752 15:25:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.752 15:25:44 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:26.752 Running I/O for 10 seconds... 
00:15:36.788 00:15:36.788 Latency(us) 00:15:36.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.788 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:36.788 Verification LBA range: start 0x0 length 0x4000 00:15:36.788 NVMe0n1 : 10.07 11365.70 44.40 0.00 0.00 89752.57 24248.32 76458.67 00:15:36.788 =================================================================================================================== 00:15:36.788 Total : 11365.70 44.40 0.00 0.00 89752.57 24248.32 76458.67 00:15:36.788 0 00:15:37.050 15:25:54 -- target/queue_depth.sh@39 -- # killprocess 1594226 00:15:37.050 15:25:54 -- common/autotest_common.sh@936 -- # '[' -z 1594226 ']' 00:15:37.050 15:25:54 -- common/autotest_common.sh@940 -- # kill -0 1594226 00:15:37.050 15:25:54 -- common/autotest_common.sh@941 -- # uname 00:15:37.050 15:25:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:37.050 15:25:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1594226 00:15:37.050 15:25:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:37.050 15:25:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:37.050 15:25:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1594226' 00:15:37.050 killing process with pid 1594226 00:15:37.050 15:25:54 -- common/autotest_common.sh@955 -- # kill 1594226 00:15:37.050 Received shutdown signal, test time was about 10.000000 seconds 00:15:37.050 00:15:37.050 Latency(us) 00:15:37.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.050 =================================================================================================================== 00:15:37.050 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:37.050 15:25:54 -- common/autotest_common.sh@960 -- # wait 1594226 00:15:37.050 15:25:54 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:37.050 15:25:54 -- 
target/queue_depth.sh@43 -- # nvmftestfini 00:15:37.050 15:25:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:37.050 15:25:54 -- nvmf/common.sh@117 -- # sync 00:15:37.050 15:25:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:37.050 15:25:54 -- nvmf/common.sh@120 -- # set +e 00:15:37.050 15:25:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:37.050 15:25:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:37.050 rmmod nvme_tcp 00:15:37.050 rmmod nvme_fabrics 00:15:37.050 rmmod nvme_keyring 00:15:37.050 15:25:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:37.050 15:25:54 -- nvmf/common.sh@124 -- # set -e 00:15:37.050 15:25:54 -- nvmf/common.sh@125 -- # return 0 00:15:37.050 15:25:54 -- nvmf/common.sh@478 -- # '[' -n 1594085 ']' 00:15:37.050 15:25:54 -- nvmf/common.sh@479 -- # killprocess 1594085 00:15:37.050 15:25:54 -- common/autotest_common.sh@936 -- # '[' -z 1594085 ']' 00:15:37.312 15:25:54 -- common/autotest_common.sh@940 -- # kill -0 1594085 00:15:37.312 15:25:54 -- common/autotest_common.sh@941 -- # uname 00:15:37.312 15:25:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:37.312 15:25:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1594085 00:15:37.312 15:25:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:37.312 15:25:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:37.312 15:25:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1594085' 00:15:37.312 killing process with pid 1594085 00:15:37.312 15:25:54 -- common/autotest_common.sh@955 -- # kill 1594085 00:15:37.312 15:25:54 -- common/autotest_common.sh@960 -- # wait 1594085 00:15:37.312 15:25:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:37.312 15:25:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:37.312 15:25:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:37.312 15:25:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:15:37.312 15:25:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:37.312 15:25:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.312 15:25:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:37.312 15:25:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.862 15:25:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:39.862 00:15:39.862 real 0m22.117s 00:15:39.862 user 0m25.673s 00:15:39.862 sys 0m6.575s 00:15:39.862 15:25:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:39.862 15:25:56 -- common/autotest_common.sh@10 -- # set +x 00:15:39.862 ************************************ 00:15:39.862 END TEST nvmf_queue_depth 00:15:39.862 ************************************ 00:15:39.862 15:25:56 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:39.862 15:25:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:39.862 15:25:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:39.862 15:25:56 -- common/autotest_common.sh@10 -- # set +x 00:15:39.862 ************************************ 00:15:39.862 START TEST nvmf_multipath 00:15:39.862 ************************************ 00:15:39.862 15:25:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:39.862 * Looking for test storage... 
00:15:39.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:39.862 15:25:57 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:39.862 15:25:57 -- nvmf/common.sh@7 -- # uname -s 00:15:39.862 15:25:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.862 15:25:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.862 15:25:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.862 15:25:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.862 15:25:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.862 15:25:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.862 15:25:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.862 15:25:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.862 15:25:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.862 15:25:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.862 15:25:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:39.862 15:25:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:39.862 15:25:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.862 15:25:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.862 15:25:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:39.862 15:25:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.862 15:25:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:39.862 15:25:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.862 15:25:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.862 15:25:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.862 15:25:57 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.862 15:25:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.862 15:25:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.862 15:25:57 -- paths/export.sh@5 -- # export PATH 00:15:39.862 15:25:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.862 15:25:57 -- nvmf/common.sh@47 -- # : 0 00:15:39.862 15:25:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:39.862 15:25:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:39.862 15:25:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.862 15:25:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.862 15:25:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.862 15:25:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:39.862 15:25:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:39.862 15:25:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:39.862 15:25:57 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:39.862 15:25:57 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:39.862 15:25:57 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:39.862 15:25:57 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:39.862 15:25:57 -- target/multipath.sh@43 -- # nvmftestinit 00:15:39.862 15:25:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:39.862 15:25:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.862 15:25:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:39.863 15:25:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:39.863 15:25:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:39.863 15:25:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:15:39.863 15:25:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.863 15:25:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.863 15:25:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:39.863 15:25:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:39.863 15:25:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:39.863 15:25:57 -- common/autotest_common.sh@10 -- # set +x 00:15:46.455 15:26:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:46.455 15:26:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:46.455 15:26:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:46.455 15:26:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:46.455 15:26:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:46.455 15:26:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:46.455 15:26:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:46.455 15:26:03 -- nvmf/common.sh@295 -- # net_devs=() 00:15:46.455 15:26:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:46.455 15:26:03 -- nvmf/common.sh@296 -- # e810=() 00:15:46.455 15:26:03 -- nvmf/common.sh@296 -- # local -ga e810 00:15:46.455 15:26:03 -- nvmf/common.sh@297 -- # x722=() 00:15:46.455 15:26:03 -- nvmf/common.sh@297 -- # local -ga x722 00:15:46.455 15:26:03 -- nvmf/common.sh@298 -- # mlx=() 00:15:46.455 15:26:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:46.455 15:26:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:46.455 15:26:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:46.455 15:26:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:46.455 15:26:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:46.455 15:26:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:46.455 15:26:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:15:46.455 15:26:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:46.455 15:26:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:46.455 15:26:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:46.455 15:26:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:46.455 15:26:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:46.455 15:26:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:46.455 15:26:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:46.455 15:26:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:46.455 15:26:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:46.455 15:26:03 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:46.455 15:26:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:46.455 15:26:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:46.455 15:26:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:46.455 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:46.455 15:26:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:46.455 15:26:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:46.455 15:26:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:46.455 15:26:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:46.455 15:26:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:46.455 15:26:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:46.455 15:26:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:46.455 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:46.455 15:26:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:46.455 15:26:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:46.455 15:26:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:46.455 15:26:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:46.455 15:26:03 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:46.455 15:26:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:46.455 15:26:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:46.455 15:26:03 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:46.455 15:26:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:46.455 15:26:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:46.455 15:26:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:46.455 15:26:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:46.455 15:26:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:46.455 Found net devices under 0000:31:00.0: cvl_0_0 00:15:46.455 15:26:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:46.455 15:26:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:46.455 15:26:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:46.455 15:26:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:46.455 15:26:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:46.455 15:26:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:46.455 Found net devices under 0000:31:00.1: cvl_0_1 00:15:46.455 15:26:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:46.455 15:26:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:46.455 15:26:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:46.455 15:26:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:46.455 15:26:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:46.455 15:26:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:46.455 15:26:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:46.455 15:26:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:46.455 15:26:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:46.455 15:26:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 
00:15:46.455 15:26:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:46.455 15:26:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:46.455 15:26:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:46.455 15:26:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:46.455 15:26:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:46.455 15:26:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:46.455 15:26:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:46.455 15:26:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:46.455 15:26:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:46.720 15:26:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:46.720 15:26:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:46.720 15:26:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:46.720 15:26:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:46.720 15:26:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:46.720 15:26:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:46.720 15:26:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:46.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:46.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:15:46.982 00:15:46.982 --- 10.0.0.2 ping statistics --- 00:15:46.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.982 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:15:46.982 15:26:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:46.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:46.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:15:46.982 00:15:46.982 --- 10.0.0.1 ping statistics --- 00:15:46.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.982 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:15:46.982 15:26:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:46.982 15:26:04 -- nvmf/common.sh@411 -- # return 0 00:15:46.982 15:26:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:46.982 15:26:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:46.982 15:26:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:46.982 15:26:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:46.982 15:26:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:46.982 15:26:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:46.982 15:26:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:46.982 15:26:04 -- target/multipath.sh@45 -- # '[' -z ']' 00:15:46.982 15:26:04 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:46.982 only one NIC for nvmf test 00:15:46.982 15:26:04 -- target/multipath.sh@47 -- # nvmftestfini 00:15:46.982 15:26:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:46.982 15:26:04 -- nvmf/common.sh@117 -- # sync 00:15:46.982 15:26:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:46.982 15:26:04 -- nvmf/common.sh@120 -- # set +e 00:15:46.982 15:26:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:46.982 15:26:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:46.982 rmmod nvme_tcp 00:15:46.982 rmmod nvme_fabrics 00:15:46.982 rmmod nvme_keyring 00:15:46.982 15:26:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:46.982 15:26:04 -- nvmf/common.sh@124 -- # set -e 00:15:46.982 15:26:04 -- nvmf/common.sh@125 -- # return 0 00:15:46.982 15:26:04 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:46.982 15:26:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:46.982 15:26:04 -- 
nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:46.982 15:26:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:46.982 15:26:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:46.982 15:26:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:46.982 15:26:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.982 15:26:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.982 15:26:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.529 15:26:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:49.529 15:26:06 -- target/multipath.sh@48 -- # exit 0 00:15:49.529 15:26:06 -- target/multipath.sh@1 -- # nvmftestfini 00:15:49.529 15:26:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:49.529 15:26:06 -- nvmf/common.sh@117 -- # sync 00:15:49.529 15:26:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:49.529 15:26:06 -- nvmf/common.sh@120 -- # set +e 00:15:49.529 15:26:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:49.529 15:26:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:49.529 15:26:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:49.529 15:26:06 -- nvmf/common.sh@124 -- # set -e 00:15:49.529 15:26:06 -- nvmf/common.sh@125 -- # return 0 00:15:49.529 15:26:06 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:49.530 15:26:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:49.530 15:26:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:49.530 15:26:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:49.530 15:26:06 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:49.530 15:26:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:49.530 15:26:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.530 15:26:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.530 15:26:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.530 15:26:06 
-- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:49.530 00:15:49.530 real 0m9.466s 00:15:49.530 user 0m1.983s 00:15:49.530 sys 0m5.347s 00:15:49.530 15:26:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:49.530 15:26:06 -- common/autotest_common.sh@10 -- # set +x 00:15:49.530 ************************************ 00:15:49.530 END TEST nvmf_multipath 00:15:49.530 ************************************ 00:15:49.530 15:26:06 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:49.530 15:26:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:49.530 15:26:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:49.530 15:26:06 -- common/autotest_common.sh@10 -- # set +x 00:15:49.530 ************************************ 00:15:49.530 START TEST nvmf_zcopy 00:15:49.530 ************************************ 00:15:49.530 15:26:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:49.530 * Looking for test storage... 
00:15:49.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:49.530 15:26:06 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:49.530 15:26:06 -- nvmf/common.sh@7 -- # uname -s 00:15:49.530 15:26:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.530 15:26:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.530 15:26:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.530 15:26:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.530 15:26:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.530 15:26:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.530 15:26:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.530 15:26:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.530 15:26:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.530 15:26:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.530 15:26:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:49.530 15:26:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:49.530 15:26:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.530 15:26:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.530 15:26:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:49.530 15:26:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.530 15:26:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:49.530 15:26:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.530 15:26:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.530 15:26:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.530 15:26:06 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.530 15:26:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.530 15:26:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.530 15:26:06 -- paths/export.sh@5 -- # export PATH 00:15:49.530 15:26:06 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.530 15:26:06 -- nvmf/common.sh@47 -- # : 0 00:15:49.530 15:26:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:49.530 15:26:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:49.530 15:26:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:49.530 15:26:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.530 15:26:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.530 15:26:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:49.530 15:26:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:49.530 15:26:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:49.530 15:26:06 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:49.530 15:26:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:49.530 15:26:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:49.530 15:26:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:49.530 15:26:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:49.530 15:26:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:49.530 15:26:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.530 15:26:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.530 15:26:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.530 15:26:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:49.530 15:26:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:49.530 15:26:06 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:15:49.530 15:26:06 -- common/autotest_common.sh@10 -- # set +x 00:15:56.120 15:26:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:56.120 15:26:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:56.120 15:26:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:56.120 15:26:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:56.120 15:26:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:56.120 15:26:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:56.120 15:26:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:56.120 15:26:13 -- nvmf/common.sh@295 -- # net_devs=() 00:15:56.120 15:26:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:56.120 15:26:13 -- nvmf/common.sh@296 -- # e810=() 00:15:56.120 15:26:13 -- nvmf/common.sh@296 -- # local -ga e810 00:15:56.120 15:26:13 -- nvmf/common.sh@297 -- # x722=() 00:15:56.120 15:26:13 -- nvmf/common.sh@297 -- # local -ga x722 00:15:56.120 15:26:13 -- nvmf/common.sh@298 -- # mlx=() 00:15:56.120 15:26:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:56.120 15:26:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:56.120 15:26:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:56.120 15:26:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:56.120 15:26:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:56.120 15:26:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:56.120 15:26:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:56.120 15:26:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:56.120 15:26:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:56.120 15:26:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:56.120 15:26:13 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:56.120 15:26:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:56.120 15:26:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:56.120 15:26:13 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:56.120 15:26:13 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:56.120 15:26:13 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:56.120 15:26:13 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:56.120 15:26:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:56.120 15:26:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:56.120 15:26:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:56.120 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:56.120 15:26:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:56.120 15:26:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:56.120 15:26:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.120 15:26:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.120 15:26:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:56.120 15:26:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:56.120 15:26:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:56.120 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:56.120 15:26:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:56.120 15:26:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:56.120 15:26:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.120 15:26:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.120 15:26:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:56.120 15:26:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:56.120 15:26:13 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:56.120 15:26:13 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:56.120 15:26:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:15:56.120 15:26:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.120 15:26:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:56.120 15:26:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.120 15:26:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:56.120 Found net devices under 0000:31:00.0: cvl_0_0 00:15:56.120 15:26:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.120 15:26:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:56.120 15:26:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.120 15:26:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:56.120 15:26:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.120 15:26:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:56.120 Found net devices under 0000:31:00.1: cvl_0_1 00:15:56.120 15:26:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.120 15:26:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:56.120 15:26:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:56.120 15:26:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:56.120 15:26:13 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:56.120 15:26:13 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:56.120 15:26:13 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.120 15:26:13 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.120 15:26:13 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:56.120 15:26:13 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:56.120 15:26:13 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:56.120 15:26:13 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:56.120 15:26:13 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:56.120 15:26:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:15:56.120 15:26:13 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.120 15:26:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:56.121 15:26:13 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:56.121 15:26:13 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:56.121 15:26:13 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:56.121 15:26:13 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:56.121 15:26:13 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:56.121 15:26:13 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:56.121 15:26:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:56.121 15:26:13 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:56.121 15:26:13 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:56.121 15:26:13 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:56.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:56.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:15:56.121 00:15:56.121 --- 10.0.0.2 ping statistics --- 00:15:56.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.121 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:15:56.121 15:26:13 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:56.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:56.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:15:56.121 00:15:56.121 --- 10.0.0.1 ping statistics --- 00:15:56.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.121 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:15:56.121 15:26:13 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.121 15:26:13 -- nvmf/common.sh@411 -- # return 0 00:15:56.121 15:26:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:56.121 15:26:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.121 15:26:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:56.121 15:26:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:56.121 15:26:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.121 15:26:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:56.121 15:26:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:56.121 15:26:13 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:56.121 15:26:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:56.121 15:26:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:56.121 15:26:13 -- common/autotest_common.sh@10 -- # set +x 00:15:56.121 15:26:13 -- nvmf/common.sh@470 -- # nvmfpid=1604902 00:15:56.121 15:26:13 -- nvmf/common.sh@471 -- # waitforlisten 1604902 00:15:56.121 15:26:13 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:56.121 15:26:13 -- common/autotest_common.sh@817 -- # '[' -z 1604902 ']' 00:15:56.121 15:26:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.121 15:26:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:56.121 15:26:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:56.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.381 15:26:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:56.381 15:26:13 -- common/autotest_common.sh@10 -- # set +x 00:15:56.381 [2024-04-26 15:26:13.620350] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:15:56.381 [2024-04-26 15:26:13.620397] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.381 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.381 [2024-04-26 15:26:13.702329] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.381 [2024-04-26 15:26:13.774190] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.381 [2024-04-26 15:26:13.774232] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.381 [2024-04-26 15:26:13.774240] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:56.381 [2024-04-26 15:26:13.774247] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:56.381 [2024-04-26 15:26:13.774253] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:56.381 [2024-04-26 15:26:13.774276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.950 15:26:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:56.950 15:26:14 -- common/autotest_common.sh@850 -- # return 0 00:15:56.950 15:26:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:56.950 15:26:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:56.950 15:26:14 -- common/autotest_common.sh@10 -- # set +x 00:15:57.211 15:26:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.211 15:26:14 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:57.211 15:26:14 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:57.211 15:26:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:57.211 15:26:14 -- common/autotest_common.sh@10 -- # set +x 00:15:57.211 [2024-04-26 15:26:14.421666] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.211 15:26:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:57.211 15:26:14 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:57.211 15:26:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:57.211 15:26:14 -- common/autotest_common.sh@10 -- # set +x 00:15:57.211 15:26:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:57.211 15:26:14 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.211 15:26:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:57.211 15:26:14 -- common/autotest_common.sh@10 -- # set +x 00:15:57.211 [2024-04-26 15:26:14.445856] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.211 15:26:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:57.211 15:26:14 -- target/zcopy.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:57.211 15:26:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:57.211 15:26:14 -- common/autotest_common.sh@10 -- # set +x 00:15:57.211 15:26:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:57.211 15:26:14 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:57.211 15:26:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:57.211 15:26:14 -- common/autotest_common.sh@10 -- # set +x 00:15:57.211 malloc0 00:15:57.211 15:26:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:57.211 15:26:14 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:57.211 15:26:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:57.211 15:26:14 -- common/autotest_common.sh@10 -- # set +x 00:15:57.211 15:26:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:57.211 15:26:14 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:57.211 15:26:14 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:57.211 15:26:14 -- nvmf/common.sh@521 -- # config=() 00:15:57.211 15:26:14 -- nvmf/common.sh@521 -- # local subsystem config 00:15:57.211 15:26:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:57.211 15:26:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:57.211 { 00:15:57.211 "params": { 00:15:57.211 "name": "Nvme$subsystem", 00:15:57.211 "trtype": "$TEST_TRANSPORT", 00:15:57.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:57.211 "adrfam": "ipv4", 00:15:57.211 "trsvcid": "$NVMF_PORT", 00:15:57.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:57.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:57.211 "hdgst": ${hdgst:-false}, 00:15:57.211 "ddgst": ${ddgst:-false} 00:15:57.211 }, 00:15:57.211 "method": "bdev_nvme_attach_controller" 00:15:57.211 } 00:15:57.211 
EOF 00:15:57.211 )") 00:15:57.211 15:26:14 -- nvmf/common.sh@543 -- # cat 00:15:57.211 15:26:14 -- nvmf/common.sh@545 -- # jq . 00:15:57.211 15:26:14 -- nvmf/common.sh@546 -- # IFS=, 00:15:57.211 15:26:14 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:57.211 "params": { 00:15:57.211 "name": "Nvme1", 00:15:57.211 "trtype": "tcp", 00:15:57.211 "traddr": "10.0.0.2", 00:15:57.211 "adrfam": "ipv4", 00:15:57.211 "trsvcid": "4420", 00:15:57.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:57.212 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:57.212 "hdgst": false, 00:15:57.212 "ddgst": false 00:15:57.212 }, 00:15:57.212 "method": "bdev_nvme_attach_controller" 00:15:57.212 }' 00:15:57.212 [2024-04-26 15:26:14.546113] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:15:57.212 [2024-04-26 15:26:14.546175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605250 ] 00:15:57.212 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.212 [2024-04-26 15:26:14.608251] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.472 [2024-04-26 15:26:14.670659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.472 Running I/O for 10 seconds... 
00:16:07.553 00:16:07.553 Latency(us) 00:16:07.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.553 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:07.553 Verification LBA range: start 0x0 length 0x1000 00:16:07.553 Nvme1n1 : 10.05 8691.82 67.90 0.00 0.00 14622.26 2730.67 43690.67 00:16:07.553 =================================================================================================================== 00:16:07.553 Total : 8691.82 67.90 0.00 0.00 14622.26 2730.67 43690.67 00:16:07.815 15:26:25 -- target/zcopy.sh@39 -- # perfpid=1607263 00:16:07.815 15:26:25 -- target/zcopy.sh@41 -- # xtrace_disable 00:16:07.815 15:26:25 -- common/autotest_common.sh@10 -- # set +x 00:16:07.815 15:26:25 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:07.815 15:26:25 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:07.815 15:26:25 -- nvmf/common.sh@521 -- # config=() 00:16:07.815 15:26:25 -- nvmf/common.sh@521 -- # local subsystem config 00:16:07.815 15:26:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:07.815 15:26:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:07.815 { 00:16:07.815 "params": { 00:16:07.815 "name": "Nvme$subsystem", 00:16:07.815 "trtype": "$TEST_TRANSPORT", 00:16:07.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:07.815 "adrfam": "ipv4", 00:16:07.815 "trsvcid": "$NVMF_PORT", 00:16:07.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:07.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:07.815 "hdgst": ${hdgst:-false}, 00:16:07.815 "ddgst": ${ddgst:-false} 00:16:07.815 }, 00:16:07.815 "method": "bdev_nvme_attach_controller" 00:16:07.815 } 00:16:07.815 EOF 00:16:07.815 )") 00:16:07.815 15:26:25 -- nvmf/common.sh@543 -- # cat 00:16:07.815 [2024-04-26 15:26:25.025879] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:16:07.815 [2024-04-26 15:26:25.025910] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.815 15:26:25 -- nvmf/common.sh@545 -- # jq . 00:16:07.815 15:26:25 -- nvmf/common.sh@546 -- # IFS=, 00:16:07.815 15:26:25 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:07.815 "params": { 00:16:07.815 "name": "Nvme1", 00:16:07.815 "trtype": "tcp", 00:16:07.815 "traddr": "10.0.0.2", 00:16:07.815 "adrfam": "ipv4", 00:16:07.815 "trsvcid": "4420", 00:16:07.815 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:07.815 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:07.815 "hdgst": false, 00:16:07.815 "ddgst": false 00:16:07.815 }, 00:16:07.815 "method": "bdev_nvme_attach_controller" 00:16:07.815 }' 00:16:07.815 [2024-04-26 15:26:25.037862] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.815 [2024-04-26 15:26:25.037871] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.815 [2024-04-26 15:26:25.049886] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.815 [2024-04-26 15:26:25.049895] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.815 [2024-04-26 15:26:25.061918] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.815 [2024-04-26 15:26:25.061926] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.815 [2024-04-26 15:26:25.064476] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:16:07.815 [2024-04-26 15:26:25.064526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1607263 ] 00:16:07.815 [2024-04-26 15:26:25.073948] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.815 [2024-04-26 15:26:25.073956] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.815 [2024-04-26 15:26:25.085980] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.815 [2024-04-26 15:26:25.085988] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.815 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.815 [2024-04-26 15:26:25.098011] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.815 [2024-04-26 15:26:25.098018] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.815 [2024-04-26 15:26:25.110041] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.815 [2024-04-26 15:26:25.110048] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.816 [2024-04-26 15:26:25.122070] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.816 [2024-04-26 15:26:25.122078] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.816 [2024-04-26 15:26:25.123097] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.816 [2024-04-26 15:26:25.134101] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.816 [2024-04-26 15:26:25.134110] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.816 [2024-04-26 15:26:25.146132] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:16:07.816 [2024-04-26 15:26:25.146140] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.816 [2024-04-26 15:26:25.158165] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.816 [2024-04-26 15:26:25.158177] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.816 [2024-04-26 15:26:25.170195] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.816 [2024-04-26 15:26:25.170207] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.816 [2024-04-26 15:26:25.182225] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.816 [2024-04-26 15:26:25.182234] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.816 [2024-04-26 15:26:25.185378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.816 [2024-04-26 15:26:25.194257] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.816 [2024-04-26 15:26:25.194265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.816 [2024-04-26 15:26:25.206295] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.816 [2024-04-26 15:26:25.206308] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.816 [2024-04-26 15:26:25.218322] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.816 [2024-04-26 15:26:25.218330] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.816 [2024-04-26 15:26:25.230349] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.816 [2024-04-26 15:26:25.230357] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.816 [2024-04-26 15:26:25.242381] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.816 [2024-04-26 15:26:25.242390] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.816 [2024-04-26 15:26:25.254411] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.816 [2024-04-26 15:26:25.254419] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.078 [2024-04-26 15:26:25.266454] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.078 [2024-04-26 15:26:25.266471] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.078 [2024-04-26 15:26:25.278479] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.078 [2024-04-26 15:26:25.278489] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.078 [2024-04-26 15:26:25.290511] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.078 [2024-04-26 15:26:25.290522] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.078 [2024-04-26 15:26:25.302539] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.078 [2024-04-26 15:26:25.302549] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.078 [2024-04-26 15:26:25.314571] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.078 [2024-04-26 15:26:25.314580] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.078 [2024-04-26 15:26:25.363253] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.078 [2024-04-26 15:26:25.363268] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.078 Running I/O for 5 seconds... 
00:16:08.078 [2024-04-26 15:26:25.374730] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.078 [2024-04-26 15:26:25.374740] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.078 [2024-04-26 15:26:25.390348] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.078 [2024-04-26 15:26:25.390364] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.078 [2024-04-26 15:26:25.403413] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.078 [2024-04-26 15:26:25.403429] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.078 [2024-04-26 15:26:25.416727] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.078 [2024-04-26 15:26:25.416744] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.078 [2024-04-26 15:26:25.429497] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.078 [2024-04-26 15:26:25.429514] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.078 [2024-04-26 15:26:25.443064] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.078 [2024-04-26 15:26:25.443080] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.078 [2024-04-26 15:26:25.456016] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.078 [2024-04-26 15:26:25.456032] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.078 [2024-04-26 15:26:25.469359] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.078 [2024-04-26 15:26:25.469375] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.078 [2024-04-26 15:26:25.482653] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.078 [2024-04-26 15:26:25.482669] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.078 [2024-04-26 15:26:25.496005] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.078 [2024-04-26 15:26:25.496020] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.078 [2024-04-26 15:26:25.509646] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.078 [2024-04-26 15:26:25.509661] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.078 [2024-04-26 15:26:25.522399] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.078 [2024-04-26 15:26:25.522414] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.341 [2024-04-26 15:26:25.534907] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.341 [2024-04-26 15:26:25.534923] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.341 [2024-04-26 15:26:25.548337] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.341 [2024-04-26 15:26:25.548354] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.341 [2024-04-26 15:26:25.561224] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.341 [2024-04-26 15:26:25.561239] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.341 [2024-04-26 15:26:25.574163] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.341 [2024-04-26 15:26:25.574179] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.341 [2024-04-26 15:26:25.587022] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:08.341 [2024-04-26 15:26:25.587038] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.341 [2024-04-26 15:26:25.600394] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.341 [2024-04-26 15:26:25.600410] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.341 [2024-04-26 15:26:25.614192] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.341 [2024-04-26 15:26:25.614207] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.341 [2024-04-26 15:26:25.627184] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.341 [2024-04-26 15:26:25.627200] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.341 [2024-04-26 15:26:25.640589] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.341 [2024-04-26 15:26:25.640604] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.341 [2024-04-26 15:26:25.653272] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.341 [2024-04-26 15:26:25.653287] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.341 [2024-04-26 15:26:25.666036] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.341 [2024-04-26 15:26:25.666051] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.341 [2024-04-26 15:26:25.678747] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.341 [2024-04-26 15:26:25.678762] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.341 [2024-04-26 15:26:25.691322] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.341 
[2024-04-26 15:26:25.691337] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.341 [2024-04-26 15:26:25.704911] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.341 [2024-04-26 15:26:25.704927] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.341 [2024-04-26 15:26:25.717740] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.341 [2024-04-26 15:26:25.717755] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.341 [2024-04-26 15:26:25.730541] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.341 [2024-04-26 15:26:25.730557] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.341 [2024-04-26 15:26:25.743453] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.341 [2024-04-26 15:26:25.743469] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.341 [2024-04-26 15:26:25.756375] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.341 [2024-04-26 15:26:25.756390] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.341 [2024-04-26 15:26:25.769485] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.341 [2024-04-26 15:26:25.769500] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.341 [2024-04-26 15:26:25.782457] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.341 [2024-04-26 15:26:25.782472] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.603 [2024-04-26 15:26:25.796131] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.603 [2024-04-26 15:26:25.796147] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.603 [2024-04-26 15:26:25.809592] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.603 [2024-04-26 15:26:25.809609] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.603 [2024-04-26 15:26:25.822310] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.603 [2024-04-26 15:26:25.822326] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.603 [2024-04-26 15:26:25.835017] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.603 [2024-04-26 15:26:25.835033] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.603 [2024-04-26 15:26:25.847808] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.603 [2024-04-26 15:26:25.847824] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.603 [2024-04-26 15:26:25.860593] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.603 [2024-04-26 15:26:25.860609] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.603 [2024-04-26 15:26:25.873849] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.603 [2024-04-26 15:26:25.873865] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.603 [2024-04-26 15:26:25.887042] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.603 [2024-04-26 15:26:25.887058] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.603 [2024-04-26 15:26:25.899814] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.603 [2024-04-26 15:26:25.899830] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:16:08.603 [2024-04-26 15:26:25.912302] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.603 [2024-04-26 15:26:25.912318] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.603 [2024-04-26 15:26:25.924711] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.603 [2024-04-26 15:26:25.924726] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.604 [2024-04-26 15:26:25.938400] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.604 [2024-04-26 15:26:25.938415] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.604 [2024-04-26 15:26:25.951856] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.604 [2024-04-26 15:26:25.951872] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.604 [2024-04-26 15:26:25.965607] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.604 [2024-04-26 15:26:25.965623] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.604 [2024-04-26 15:26:25.978410] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.604 [2024-04-26 15:26:25.978426] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.604 [2024-04-26 15:26:25.991549] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.604 [2024-04-26 15:26:25.991565] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.604 [2024-04-26 15:26:26.005284] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.604 [2024-04-26 15:26:26.005300] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.604 [2024-04-26 15:26:26.017755] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.604 [2024-04-26 15:26:26.017771] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.604 [2024-04-26 15:26:26.030554] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.604 [2024-04-26 15:26:26.030573] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.604 [2024-04-26 15:26:26.043101] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.604 [2024-04-26 15:26:26.043118] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.865 [2024-04-26 15:26:26.055518] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.865 [2024-04-26 15:26:26.055534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.865 [2024-04-26 15:26:26.069263] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.865 [2024-04-26 15:26:26.069279] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.865 [2024-04-26 15:26:26.082440] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.865 [2024-04-26 15:26:26.082457] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.865 [2024-04-26 15:26:26.095871] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.865 [2024-04-26 15:26:26.095887] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.865 [2024-04-26 15:26:26.109277] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.865 [2024-04-26 15:26:26.109293] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.865 [2024-04-26 15:26:26.122239] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:08.865 [2024-04-26 15:26:26.122255] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.865 [2024-04-26 15:26:26.134826] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.865 [2024-04-26 15:26:26.134847] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.865 [2024-04-26 15:26:26.147470] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.865 [2024-04-26 15:26:26.147486] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.865 [2024-04-26 15:26:26.160309] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.865 [2024-04-26 15:26:26.160325] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.865 [2024-04-26 15:26:26.173150] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.865 [2024-04-26 15:26:26.173166] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.865 [2024-04-26 15:26:26.185807] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.865 [2024-04-26 15:26:26.185823] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.865 [2024-04-26 15:26:26.198909] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.865 [2024-04-26 15:26:26.198926] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.865 [2024-04-26 15:26:26.211957] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.865 [2024-04-26 15:26:26.211973] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.865 [2024-04-26 15:26:26.225455] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.865 
[2024-04-26 15:26:26.225471] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.865 [2024-04-26 15:26:26.238321] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.865 [2024-04-26 15:26:26.238337] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.865 [2024-04-26 15:26:26.250680] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.865 [2024-04-26 15:26:26.250696] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.865 [2024-04-26 15:26:26.263498] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.865 [2024-04-26 15:26:26.263514] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.865 [2024-04-26 15:26:26.276763] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.865 [2024-04-26 15:26:26.276779] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.865 [2024-04-26 15:26:26.289748] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.865 [2024-04-26 15:26:26.289764] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.865 [2024-04-26 15:26:26.302830] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.865 [2024-04-26 15:26:26.302852] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.125 [2024-04-26 15:26:26.315336] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.125 [2024-04-26 15:26:26.315352] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.125 [2024-04-26 15:26:26.328548] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.125 [2024-04-26 15:26:26.328564] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.125 [2024-04-26 15:26:26.341481] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.125 [2024-04-26 15:26:26.341496] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.125 [2024-04-26 15:26:26.355026] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.125 [2024-04-26 15:26:26.355043] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.125 [2024-04-26 15:26:26.368426] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.125 [2024-04-26 15:26:26.368442] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.125 [2024-04-26 15:26:26.381112] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.125 [2024-04-26 15:26:26.381128] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.125 [2024-04-26 15:26:26.394381] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.125 [2024-04-26 15:26:26.394397] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.125 [2024-04-26 15:26:26.407060] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.125 [2024-04-26 15:26:26.407077] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.125 [2024-04-26 15:26:26.420733] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.125 [2024-04-26 15:26:26.420749] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.125 [2024-04-26 15:26:26.433659] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.125 [2024-04-26 15:26:26.433675] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:16:09.125 [2024-04-26 15:26:26.446591] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.125 [2024-04-26 15:26:26.446607] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.125 [2024-04-26 15:26:26.460001] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.125 [2024-04-26 15:26:26.460017] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.125 [2024-04-26 15:26:26.473545] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.125 [2024-04-26 15:26:26.473560] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.125 [2024-04-26 15:26:26.486432] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.125 [2024-04-26 15:26:26.486448] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.125 [2024-04-26 15:26:26.500312] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.125 [2024-04-26 15:26:26.500327] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.125 [2024-04-26 15:26:26.513519] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.125 [2024-04-26 15:26:26.513534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.125 [2024-04-26 15:26:26.526308] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.125 [2024-04-26 15:26:26.526327] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.125 [2024-04-26 15:26:26.539129] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.125 [2024-04-26 15:26:26.539144] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.125 [2024-04-26 15:26:26.552724] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.125 [2024-04-26 15:26:26.552739] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.126 [2024-04-26 15:26:26.565683] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.126 [2024-04-26 15:26:26.565699] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.386 [2024-04-26 15:26:26.579042] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.386 [2024-04-26 15:26:26.579058] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.386 [2024-04-26 15:26:26.592690] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.386 [2024-04-26 15:26:26.592706] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.386 [2024-04-26 15:26:26.605571] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.386 [2024-04-26 15:26:26.605586] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.386 [2024-04-26 15:26:26.619033] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.386 [2024-04-26 15:26:26.619049] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.387 [2024-04-26 15:26:26.632641] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.387 [2024-04-26 15:26:26.632657] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.387 [2024-04-26 15:26:26.645657] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.387 [2024-04-26 15:26:26.645672] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.387 [2024-04-26 15:26:26.658306] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:09.387 [2024-04-26 15:26:26.658322] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.387 [2024-04-26 15:26:26.671350] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.387 [2024-04-26 15:26:26.671366] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.387 [2024-04-26 15:26:26.684523] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.387 [2024-04-26 15:26:26.684539] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.387 [2024-04-26 15:26:26.698376] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.387 [2024-04-26 15:26:26.698392] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.387 [2024-04-26 15:26:26.711225] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.387 [2024-04-26 15:26:26.711241] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.387 [2024-04-26 15:26:26.723714] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.387 [2024-04-26 15:26:26.723729] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.387 [2024-04-26 15:26:26.737162] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.387 [2024-04-26 15:26:26.737177] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.387 [2024-04-26 15:26:26.750133] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.387 [2024-04-26 15:26:26.750148] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.387 [2024-04-26 15:26:26.763540] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.387 
[2024-04-26 15:26:26.763556] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.387 [2024-04-26 15:26:26.776174] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.387 [2024-04-26 15:26:26.776193] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.387 [2024-04-26 15:26:26.788608] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.387 [2024-04-26 15:26:26.788623] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.387 [2024-04-26 15:26:26.801937] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.387 [2024-04-26 15:26:26.801953] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.387 [2024-04-26 15:26:26.815457] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.387 [2024-04-26 15:26:26.815472] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.387 [2024-04-26 15:26:26.828979] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.387 [2024-04-26 15:26:26.828994] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.649 [2024-04-26 15:26:26.842562] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.649 [2024-04-26 15:26:26.842578] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.649 [2024-04-26 15:26:26.856506] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.649 [2024-04-26 15:26:26.856522] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.649 [2024-04-26 15:26:26.869096] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.649 [2024-04-26 15:26:26.869111] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.649 [2024-04-26 15:26:26.881836] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.649 [2024-04-26 15:26:26.881855] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.649 [2024-04-26 15:26:26.894721] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.649 [2024-04-26 15:26:26.894736] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.649 [2024-04-26 15:26:26.907193] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.649 [2024-04-26 15:26:26.907208] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.649 [2024-04-26 15:26:26.920582] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.649 [2024-04-26 15:26:26.920597] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.649 [2024-04-26 15:26:26.933606] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.649 [2024-04-26 15:26:26.933621] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.649 [2024-04-26 15:26:26.946340] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.649 [2024-04-26 15:26:26.946355] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.649 [2024-04-26 15:26:26.959652] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.649 [2024-04-26 15:26:26.959667] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.649 [2024-04-26 15:26:26.973357] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.649 [2024-04-26 15:26:26.973372] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:16:09.649 [2024-04-26 15:26:26.986190] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.649 [2024-04-26 15:26:26.986205] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.649 [2024-04-26 15:26:26.998784] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.649 [2024-04-26 15:26:26.998800] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.649 [2024-04-26 15:26:27.011542] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.649 [2024-04-26 15:26:27.011557] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.649 [2024-04-26 15:26:27.024681] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.649 [2024-04-26 15:26:27.024699] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.649 [2024-04-26 15:26:27.038433] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.649 [2024-04-26 15:26:27.038449] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.649 [2024-04-26 15:26:27.051150] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.649 [2024-04-26 15:26:27.051166] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.649 [2024-04-26 15:26:27.064868] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.649 [2024-04-26 15:26:27.064883] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.649 [2024-04-26 15:26:27.077305] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.649 [2024-04-26 15:26:27.077320] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.649 [2024-04-26 15:26:27.090184] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.649 [2024-04-26 15:26:27.090200] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.910 [2024-04-26 15:26:27.102729] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.910 [2024-04-26 15:26:27.102744] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.910 [2024-04-26 15:26:27.116102] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.910 [2024-04-26 15:26:27.116117] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.910 [2024-04-26 15:26:27.128635] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.910 [2024-04-26 15:26:27.128650] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.910 [2024-04-26 15:26:27.142279] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.910 [2024-04-26 15:26:27.142295] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.910 [2024-04-26 15:26:27.155532] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.910 [2024-04-26 15:26:27.155547] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.910 [2024-04-26 15:26:27.168202] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.910 [2024-04-26 15:26:27.168217] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.910 [2024-04-26 15:26:27.181038] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.910 [2024-04-26 15:26:27.181053] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.910 [2024-04-26 15:26:27.194480] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:09.910 [2024-04-26 15:26:27.194496] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.910 [2024-04-26 15:26:29.336359] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:12.006 [2024-04-26 15:26:29.336374] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.006 [2024-04-26 15:26:29.349261] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.006 [2024-04-26 15:26:29.349276] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.006 [2024-04-26 15:26:29.362638] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.006 [2024-04-26 15:26:29.362654] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.006 [2024-04-26 15:26:29.375531] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.006 [2024-04-26 15:26:29.375546] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.006 [2024-04-26 15:26:29.388857] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.006 [2024-04-26 15:26:29.388873] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.006 [2024-04-26 15:26:29.401985] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.006 [2024-04-26 15:26:29.402001] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.006 [2024-04-26 15:26:29.414911] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.006 [2024-04-26 15:26:29.414927] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.006 [2024-04-26 15:26:29.428309] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.006 [2024-04-26 15:26:29.428326] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.006 [2024-04-26 15:26:29.441327] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.006 
[2024-04-26 15:26:29.441342] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.006 [2024-04-26 15:26:29.454109] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.006 [2024-04-26 15:26:29.454125] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.268 [2024-04-26 15:26:29.466921] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.268 [2024-04-26 15:26:29.466937] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.268 [2024-04-26 15:26:29.480503] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.268 [2024-04-26 15:26:29.480519] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.268 [2024-04-26 15:26:29.493444] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.268 [2024-04-26 15:26:29.493463] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.268 [2024-04-26 15:26:29.507415] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.268 [2024-04-26 15:26:29.507431] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.268 [2024-04-26 15:26:29.520192] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.268 [2024-04-26 15:26:29.520207] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.268 [2024-04-26 15:26:29.533269] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.268 [2024-04-26 15:26:29.533285] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.268 [2024-04-26 15:26:29.546854] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.268 [2024-04-26 15:26:29.546870] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.268 [2024-04-26 15:26:29.559500] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.268 [2024-04-26 15:26:29.559516] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.268 [2024-04-26 15:26:29.572453] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.268 [2024-04-26 15:26:29.572469] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.268 [2024-04-26 15:26:29.585794] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.268 [2024-04-26 15:26:29.585809] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.268 [2024-04-26 15:26:29.599809] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.268 [2024-04-26 15:26:29.599824] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.268 [2024-04-26 15:26:29.612537] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.268 [2024-04-26 15:26:29.612553] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.268 [2024-04-26 15:26:29.626008] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.268 [2024-04-26 15:26:29.626024] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.268 [2024-04-26 15:26:29.639770] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.268 [2024-04-26 15:26:29.639786] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.268 [2024-04-26 15:26:29.653303] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.268 [2024-04-26 15:26:29.653319] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:16:12.268 [2024-04-26 15:26:29.666460] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.268 [2024-04-26 15:26:29.666476] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.268 [2024-04-26 15:26:29.678831] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.268 [2024-04-26 15:26:29.678851] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.268 [2024-04-26 15:26:29.692820] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.268 [2024-04-26 15:26:29.692835] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.268 [2024-04-26 15:26:29.705610] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.268 [2024-04-26 15:26:29.705626] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.529 [2024-04-26 15:26:29.718529] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.529 [2024-04-26 15:26:29.718545] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.529 [2024-04-26 15:26:29.731083] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.529 [2024-04-26 15:26:29.731100] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.529 [2024-04-26 15:26:29.743637] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.529 [2024-04-26 15:26:29.743652] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.529 [2024-04-26 15:26:29.756980] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.529 [2024-04-26 15:26:29.756995] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.529 [2024-04-26 15:26:29.769817] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.529 [2024-04-26 15:26:29.769832] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.529 [2024-04-26 15:26:29.782673] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.529 [2024-04-26 15:26:29.782692] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.529 [2024-04-26 15:26:29.795458] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.529 [2024-04-26 15:26:29.795473] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.529 [2024-04-26 15:26:29.809316] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.529 [2024-04-26 15:26:29.809331] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.529 [2024-04-26 15:26:29.822998] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.529 [2024-04-26 15:26:29.823014] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.529 [2024-04-26 15:26:29.836410] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.529 [2024-04-26 15:26:29.836426] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.529 [2024-04-26 15:26:29.849915] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.529 [2024-04-26 15:26:29.849931] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.529 [2024-04-26 15:26:29.863460] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.529 [2024-04-26 15:26:29.863476] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.529 [2024-04-26 15:26:29.877079] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:12.529 [2024-04-26 15:26:29.877095] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.529 [2024-04-26 15:26:29.889927] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.529 [2024-04-26 15:26:29.889942] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.529 [2024-04-26 15:26:29.903313] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.529 [2024-04-26 15:26:29.903329] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.529 [2024-04-26 15:26:29.916185] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.529 [2024-04-26 15:26:29.916200] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.529 [2024-04-26 15:26:29.929466] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.529 [2024-04-26 15:26:29.929482] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.529 [2024-04-26 15:26:29.942226] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.529 [2024-04-26 15:26:29.942240] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.529 [2024-04-26 15:26:29.954999] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.529 [2024-04-26 15:26:29.955015] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.529 [2024-04-26 15:26:29.967642] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.529 [2024-04-26 15:26:29.967657] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.790 [2024-04-26 15:26:29.981160] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.790 
[2024-04-26 15:26:29.981175] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.790 [2024-04-26 15:26:29.993875] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.790 [2024-04-26 15:26:29.993890] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.790 [2024-04-26 15:26:30.006888] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.790 [2024-04-26 15:26:30.006905] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.790 [2024-04-26 15:26:30.019400] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.790 [2024-04-26 15:26:30.019416] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.790 [2024-04-26 15:26:30.033000] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.790 [2024-04-26 15:26:30.033016] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.790 [2024-04-26 15:26:30.046051] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.790 [2024-04-26 15:26:30.046067] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.790 [2024-04-26 15:26:30.058589] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.790 [2024-04-26 15:26:30.058605] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.790 [2024-04-26 15:26:30.070972] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.790 [2024-04-26 15:26:30.070987] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.790 [2024-04-26 15:26:30.084467] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.790 [2024-04-26 15:26:30.084483] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.790 [2024-04-26 15:26:30.097608] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.790 [2024-04-26 15:26:30.097623] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.790 [2024-04-26 15:26:30.111227] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.790 [2024-04-26 15:26:30.111242] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.790 [2024-04-26 15:26:30.124521] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.790 [2024-04-26 15:26:30.124536] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.790 [2024-04-26 15:26:30.138506] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.790 [2024-04-26 15:26:30.138521] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.790 [2024-04-26 15:26:30.151705] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.790 [2024-04-26 15:26:30.151720] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.790 [2024-04-26 15:26:30.164401] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.790 [2024-04-26 15:26:30.164416] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.790 [2024-04-26 15:26:30.177108] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.790 [2024-04-26 15:26:30.177123] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.790 [2024-04-26 15:26:30.190065] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.790 [2024-04-26 15:26:30.190080] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:16:12.790 [2024-04-26 15:26:30.202659] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.790 [2024-04-26 15:26:30.202673] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.790 [2024-04-26 15:26:30.216201] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.790 [2024-04-26 15:26:30.216216] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.790 [2024-04-26 15:26:30.228682] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.790 [2024-04-26 15:26:30.228698] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.051 [2024-04-26 15:26:30.241459] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.051 [2024-04-26 15:26:30.241474] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.051 [2024-04-26 15:26:30.254949] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.051 [2024-04-26 15:26:30.254963] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.051 [2024-04-26 15:26:30.267747] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.051 [2024-04-26 15:26:30.267762] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.051 [2024-04-26 15:26:30.280357] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.051 [2024-04-26 15:26:30.280372] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.051 [2024-04-26 15:26:30.293451] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.051 [2024-04-26 15:26:30.293467] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.051 [2024-04-26 15:26:30.307128] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.051 [2024-04-26 15:26:30.307143] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.051 [2024-04-26 15:26:30.319909] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.051 [2024-04-26 15:26:30.319923] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.051 [2024-04-26 15:26:30.333185] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.051 [2024-04-26 15:26:30.333200] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.051 [2024-04-26 15:26:30.346572] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.051 [2024-04-26 15:26:30.346587] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.051 [2024-04-26 15:26:30.360154] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.051 [2024-04-26 15:26:30.360169] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.051 [2024-04-26 15:26:30.373485] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.051 [2024-04-26 15:26:30.373500] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.051 [2024-04-26 15:26:30.387216] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.051 [2024-04-26 15:26:30.387231] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.051 00:16:13.051 Latency(us) 00:16:13.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.051 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:13.051 Nvme1n1 : 5.01 18994.82 148.40 0.00 0.00 6732.03 3031.04 19442.35 00:16:13.051 
===================================================================================================================
00:16:13.051 Total : 18994.82 148.40 0.00 0.00 6732.03 3031.04 19442.35
00:16:13.051 [2024-04-26 15:26:30.396957] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.051 [2024-04-26 15:26:30.396971] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same error pair repeats at roughly 12 ms intervals, with only the timestamps changing, through 2024-04-26 15:26:30.517264]
00:16:13.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1607263) - No such process 00:16:13.312 15:26:30 -- target/zcopy.sh@49 -- # wait 1607263 00:16:13.312 15:26:30 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:13.312 15:26:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.312 15:26:30 -- common/autotest_common.sh@10 -- # set +x 00:16:13.312 15:26:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.312 15:26:30 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:13.312 15:26:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.312 15:26:30 -- common/autotest_common.sh@10 -- # set +x 00:16:13.312 delay0 00:16:13.312 15:26:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.312 15:26:30 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:13.312 15:26:30 --
common/autotest_common.sh@10 -- # set +x 00:16:13.312 15:26:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.312 15:26:30 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:13.312 EAL: No free 2048 kB hugepages reported on node 1 00:16:13.312 [2024-04-26 15:26:30.655047] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:19.891 Initializing NVMe Controllers 00:16:19.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:19.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:19.891 Initialization complete. Launching workers. 00:16:19.891 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 2076 00:16:19.891 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 2348, failed to submit 48 00:16:19.891 success 2179, unsuccess 169, failed 0 00:16:19.891 15:26:37 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:19.891 15:26:37 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:19.891 15:26:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:19.891 15:26:37 -- nvmf/common.sh@117 -- # sync 00:16:19.891 15:26:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:19.891 15:26:37 -- nvmf/common.sh@120 -- # set +e 00:16:19.891 15:26:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:19.891 15:26:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:19.891 rmmod nvme_tcp 00:16:19.891 rmmod nvme_fabrics 00:16:19.891 rmmod nvme_keyring 00:16:19.891 15:26:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:19.891 15:26:37 -- nvmf/common.sh@124 -- # set -e 00:16:19.891 15:26:37 -- nvmf/common.sh@125 -- # return 0 00:16:19.891 15:26:37 -- nvmf/common.sh@478 
-- # '[' -n 1604902 ']' 00:16:19.891 15:26:37 -- nvmf/common.sh@479 -- # killprocess 1604902 00:16:19.891 15:26:37 -- common/autotest_common.sh@936 -- # '[' -z 1604902 ']' 00:16:19.891 15:26:37 -- common/autotest_common.sh@940 -- # kill -0 1604902 00:16:19.891 15:26:37 -- common/autotest_common.sh@941 -- # uname 00:16:19.891 15:26:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:19.891 15:26:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1604902 00:16:19.891 15:26:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:19.891 15:26:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:19.891 15:26:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1604902' 00:16:19.891 killing process with pid 1604902 00:16:19.891 15:26:37 -- common/autotest_common.sh@955 -- # kill 1604902 00:16:19.891 15:26:37 -- common/autotest_common.sh@960 -- # wait 1604902 00:16:20.151 15:26:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:20.151 15:26:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:20.151 15:26:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:20.151 15:26:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:20.151 15:26:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:20.151 15:26:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.151 15:26:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.151 15:26:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.065 15:26:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:22.065 00:16:22.065 real 0m32.818s 00:16:22.065 user 0m44.680s 00:16:22.065 sys 0m9.963s 00:16:22.065 15:26:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:22.065 15:26:39 -- common/autotest_common.sh@10 -- # set +x 00:16:22.065 ************************************ 00:16:22.065 END TEST nvmf_zcopy 00:16:22.065 
************************************ 00:16:22.065 15:26:39 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:22.065 15:26:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:22.065 15:26:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:22.065 15:26:39 -- common/autotest_common.sh@10 -- # set +x 00:16:22.326 ************************************ 00:16:22.326 START TEST nvmf_nmic 00:16:22.326 ************************************ 00:16:22.326 15:26:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:22.326 * Looking for test storage... 00:16:22.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:22.326 15:26:39 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:22.326 15:26:39 -- nvmf/common.sh@7 -- # uname -s 00:16:22.326 15:26:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.326 15:26:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.326 15:26:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.326 15:26:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.326 15:26:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.326 15:26:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.326 15:26:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.326 15:26:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.326 15:26:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.326 15:26:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.326 15:26:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:22.326 15:26:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:22.326 15:26:39 -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.326 15:26:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.326 15:26:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:22.326 15:26:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.326 15:26:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:22.326 15:26:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.326 15:26:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.326 15:26:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.326 15:26:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.326 15:26:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.326 15:26:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.326 15:26:39 -- paths/export.sh@5 -- # export PATH 00:16:22.326 15:26:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.326 15:26:39 -- nvmf/common.sh@47 -- # : 0 00:16:22.326 15:26:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:22.326 15:26:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:22.326 15:26:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:22.326 15:26:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.326 15:26:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.326 15:26:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:22.326 15:26:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:22.326 15:26:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:22.326 15:26:39 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:22.326 15:26:39 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:22.326 15:26:39 -- target/nmic.sh@14 -- # 
nvmftestinit 00:16:22.326 15:26:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:22.326 15:26:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.326 15:26:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:22.326 15:26:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:22.326 15:26:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:22.326 15:26:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.326 15:26:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.326 15:26:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.326 15:26:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:22.326 15:26:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:22.326 15:26:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:22.326 15:26:39 -- common/autotest_common.sh@10 -- # set +x 00:16:30.463 15:26:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:30.463 15:26:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:30.463 15:26:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:30.463 15:26:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:30.463 15:26:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:30.463 15:26:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:30.463 15:26:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:30.463 15:26:46 -- nvmf/common.sh@295 -- # net_devs=() 00:16:30.463 15:26:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:30.463 15:26:46 -- nvmf/common.sh@296 -- # e810=() 00:16:30.463 15:26:46 -- nvmf/common.sh@296 -- # local -ga e810 00:16:30.463 15:26:46 -- nvmf/common.sh@297 -- # x722=() 00:16:30.463 15:26:46 -- nvmf/common.sh@297 -- # local -ga x722 00:16:30.463 15:26:46 -- nvmf/common.sh@298 -- # mlx=() 00:16:30.463 15:26:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:30.463 15:26:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:30.463 15:26:46 -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:30.463 15:26:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:30.463 15:26:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:30.463 15:26:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:30.463 15:26:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:30.463 15:26:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:30.463 15:26:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:30.463 15:26:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:30.463 15:26:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:30.463 15:26:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:30.463 15:26:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:30.463 15:26:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:30.463 15:26:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:30.463 15:26:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:30.463 15:26:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:30.463 15:26:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:30.463 15:26:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.463 15:26:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:30.463 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:30.463 15:26:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.463 15:26:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.463 15:26:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.463 15:26:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.463 15:26:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.463 15:26:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.463 15:26:46 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:30.463 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:30.463 15:26:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.463 15:26:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.463 15:26:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.463 15:26:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.463 15:26:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.463 15:26:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:30.463 15:26:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:30.463 15:26:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:30.463 15:26:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.463 15:26:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.463 15:26:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:30.463 15:26:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.463 15:26:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:30.463 Found net devices under 0000:31:00.0: cvl_0_0 00:16:30.463 15:26:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.463 15:26:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.463 15:26:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.463 15:26:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:30.463 15:26:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.463 15:26:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:30.463 Found net devices under 0000:31:00.1: cvl_0_1 00:16:30.463 15:26:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.463 15:26:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:30.463 15:26:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:30.463 15:26:46 -- nvmf/common.sh@405 -- # [[ yes == yes 
]] 00:16:30.463 15:26:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:30.463 15:26:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:30.463 15:26:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.463 15:26:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.463 15:26:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:30.463 15:26:46 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:30.463 15:26:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:30.463 15:26:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:30.463 15:26:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:30.463 15:26:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:30.463 15:26:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.463 15:26:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:30.463 15:26:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:30.463 15:26:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:30.463 15:26:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:30.463 15:26:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:30.463 15:26:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:30.463 15:26:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:30.463 15:26:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:30.463 15:26:46 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:30.463 15:26:46 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:30.463 15:26:46 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:30.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:30.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:16:30.463 00:16:30.463 --- 10.0.0.2 ping statistics --- 00:16:30.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.463 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:16:30.464 15:26:46 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:30.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:30.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:16:30.464 00:16:30.464 --- 10.0.0.1 ping statistics --- 00:16:30.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.464 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:16:30.464 15:26:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.464 15:26:47 -- nvmf/common.sh@411 -- # return 0 00:16:30.464 15:26:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:30.464 15:26:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.464 15:26:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:30.464 15:26:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:30.464 15:26:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.464 15:26:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:30.464 15:26:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:30.464 15:26:47 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:30.464 15:26:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:30.464 15:26:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:30.464 15:26:47 -- common/autotest_common.sh@10 -- # set +x 00:16:30.464 15:26:47 -- nvmf/common.sh@470 -- # nvmfpid=1613881 00:16:30.464 15:26:47 -- nvmf/common.sh@471 -- # waitforlisten 1613881 00:16:30.464 15:26:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:30.464 15:26:47 -- common/autotest_common.sh@817 -- 
# '[' -z 1613881 ']' 00:16:30.464 15:26:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.464 15:26:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:30.464 15:26:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.464 15:26:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:30.464 15:26:47 -- common/autotest_common.sh@10 -- # set +x 00:16:30.464 [2024-04-26 15:26:47.116283] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:16:30.464 [2024-04-26 15:26:47.116348] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.464 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.464 [2024-04-26 15:26:47.189301] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:30.464 [2024-04-26 15:26:47.264064] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.464 [2024-04-26 15:26:47.264107] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.464 [2024-04-26 15:26:47.264117] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.464 [2024-04-26 15:26:47.264124] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.464 [2024-04-26 15:26:47.264131] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:30.464 [2024-04-26 15:26:47.264295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.464 [2024-04-26 15:26:47.264412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:30.464 [2024-04-26 15:26:47.264570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.464 [2024-04-26 15:26:47.264571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:30.464 15:26:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:30.464 15:26:47 -- common/autotest_common.sh@850 -- # return 0 00:16:30.464 15:26:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:30.464 15:26:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:30.464 15:26:47 -- common/autotest_common.sh@10 -- # set +x 00:16:30.724 15:26:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.724 15:26:47 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:30.724 15:26:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.724 15:26:47 -- common/autotest_common.sh@10 -- # set +x 00:16:30.724 [2024-04-26 15:26:47.943391] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.724 15:26:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.724 15:26:47 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:30.724 15:26:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.724 15:26:47 -- common/autotest_common.sh@10 -- # set +x 00:16:30.724 Malloc0 00:16:30.724 15:26:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.724 15:26:47 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:30.724 15:26:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.724 15:26:47 -- common/autotest_common.sh@10 -- # set +x 00:16:30.724 15:26:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:16:30.724 15:26:47 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:30.724 15:26:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.725 15:26:47 -- common/autotest_common.sh@10 -- # set +x 00:16:30.725 15:26:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.725 15:26:47 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:30.725 15:26:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.725 15:26:47 -- common/autotest_common.sh@10 -- # set +x 00:16:30.725 [2024-04-26 15:26:48.002772] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.725 15:26:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.725 15:26:48 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:30.725 test case1: single bdev can't be used in multiple subsystems 00:16:30.725 15:26:48 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:30.725 15:26:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.725 15:26:48 -- common/autotest_common.sh@10 -- # set +x 00:16:30.725 15:26:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.725 15:26:48 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:30.725 15:26:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.725 15:26:48 -- common/autotest_common.sh@10 -- # set +x 00:16:30.725 15:26:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.725 15:26:48 -- target/nmic.sh@28 -- # nmic_status=0 00:16:30.725 15:26:48 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:30.725 15:26:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.725 15:26:48 -- common/autotest_common.sh@10 
-- # set +x 00:16:30.725 [2024-04-26 15:26:48.038726] bdev.c:8005:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:30.725 [2024-04-26 15:26:48.038749] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:30.725 [2024-04-26 15:26:48.038757] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.725 request: 00:16:30.725 { 00:16:30.725 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:30.725 "namespace": { 00:16:30.725 "bdev_name": "Malloc0", 00:16:30.725 "no_auto_visible": false 00:16:30.725 }, 00:16:30.725 "method": "nvmf_subsystem_add_ns", 00:16:30.725 "req_id": 1 00:16:30.725 } 00:16:30.725 Got JSON-RPC error response 00:16:30.725 response: 00:16:30.725 { 00:16:30.725 "code": -32602, 00:16:30.725 "message": "Invalid parameters" 00:16:30.725 } 00:16:30.725 15:26:48 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:30.725 15:26:48 -- target/nmic.sh@29 -- # nmic_status=1 00:16:30.725 15:26:48 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:30.725 15:26:48 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:30.725 Adding namespace failed - expected result. 
00:16:30.725 15:26:48 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:30.725 test case2: host connect to nvmf target in multiple paths 00:16:30.725 15:26:48 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:30.725 15:26:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.725 15:26:48 -- common/autotest_common.sh@10 -- # set +x 00:16:30.725 [2024-04-26 15:26:48.050869] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:30.725 15:26:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.725 15:26:48 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:32.107 15:26:49 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:34.024 15:26:51 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:34.024 15:26:51 -- common/autotest_common.sh@1184 -- # local i=0 00:16:34.024 15:26:51 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:34.024 15:26:51 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:34.024 15:26:51 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:35.962 15:26:53 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:35.962 15:26:53 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:35.962 15:26:53 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:35.962 15:26:53 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:35.962 15:26:53 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 
00:16:35.962 15:26:53 -- common/autotest_common.sh@1194 -- # return 0 00:16:35.962 15:26:53 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:35.962 [global] 00:16:35.962 thread=1 00:16:35.962 invalidate=1 00:16:35.962 rw=write 00:16:35.962 time_based=1 00:16:35.962 runtime=1 00:16:35.962 ioengine=libaio 00:16:35.962 direct=1 00:16:35.962 bs=4096 00:16:35.962 iodepth=1 00:16:35.962 norandommap=0 00:16:35.962 numjobs=1 00:16:35.962 00:16:35.962 verify_dump=1 00:16:35.962 verify_backlog=512 00:16:35.962 verify_state_save=0 00:16:35.962 do_verify=1 00:16:35.962 verify=crc32c-intel 00:16:35.962 [job0] 00:16:35.962 filename=/dev/nvme0n1 00:16:35.962 Could not set queue depth (nvme0n1) 00:16:36.227 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:36.227 fio-3.35 00:16:36.227 Starting 1 thread 00:16:37.614 00:16:37.614 job0: (groupid=0, jobs=1): err= 0: pid=1615219: Fri Apr 26 15:26:54 2024 00:16:37.614 read: IOPS=597, BW=2390KiB/s (2447kB/s)(2392KiB/1001msec) 00:16:37.614 slat (nsec): min=6834, max=56816, avg=23684.82, stdev=6736.95 00:16:37.614 clat (usec): min=235, max=975, avg=824.74, stdev=97.18 00:16:37.614 lat (usec): min=260, max=1001, avg=848.43, stdev=97.80 00:16:37.614 clat percentiles (usec): 00:16:37.614 | 1.00th=[ 445], 5.00th=[ 635], 10.00th=[ 701], 20.00th=[ 766], 00:16:37.614 | 30.00th=[ 807], 40.00th=[ 840], 50.00th=[ 857], 60.00th=[ 865], 00:16:37.614 | 70.00th=[ 881], 80.00th=[ 898], 90.00th=[ 906], 95.00th=[ 922], 00:16:37.614 | 99.00th=[ 955], 99.50th=[ 963], 99.90th=[ 979], 99.95th=[ 979], 00:16:37.614 | 99.99th=[ 979] 00:16:37.614 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:16:37.614 slat (nsec): min=9379, max=71070, avg=26717.51, stdev=10029.04 00:16:37.614 clat (usec): min=131, max=592, avg=444.05, stdev=68.26 00:16:37.614 lat (usec): min=141, max=624, avg=470.77, 
stdev=72.99 00:16:37.614 clat percentiles (usec): 00:16:37.614 | 1.00th=[ 251], 5.00th=[ 330], 10.00th=[ 355], 20.00th=[ 379], 00:16:37.614 | 30.00th=[ 412], 40.00th=[ 449], 50.00th=[ 461], 60.00th=[ 474], 00:16:37.614 | 70.00th=[ 490], 80.00th=[ 502], 90.00th=[ 510], 95.00th=[ 529], 00:16:37.614 | 99.00th=[ 553], 99.50th=[ 562], 99.90th=[ 570], 99.95th=[ 594], 00:16:37.614 | 99.99th=[ 594] 00:16:37.614 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:37.614 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:37.614 lat (usec) : 250=0.62%, 500=49.57%, 750=18.99%, 1000=30.83% 00:16:37.614 cpu : usr=1.50%, sys=5.00%, ctx=1622, majf=0, minf=1 00:16:37.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:37.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.614 issued rwts: total=598,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:37.614 00:16:37.614 Run status group 0 (all jobs): 00:16:37.614 READ: bw=2390KiB/s (2447kB/s), 2390KiB/s-2390KiB/s (2447kB/s-2447kB/s), io=2392KiB (2449kB), run=1001-1001msec 00:16:37.614 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:16:37.614 00:16:37.614 Disk stats (read/write): 00:16:37.614 nvme0n1: ios=562/969, merge=0/0, ticks=473/430, in_queue=903, util=93.89% 00:16:37.614 15:26:54 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:37.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:37.615 15:26:54 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:37.615 15:26:54 -- common/autotest_common.sh@1205 -- # local i=0 00:16:37.615 15:26:54 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:37.615 15:26:54 -- 
common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:37.615 15:26:54 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:37.615 15:26:54 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:37.615 15:26:54 -- common/autotest_common.sh@1217 -- # return 0 00:16:37.615 15:26:54 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:37.615 15:26:54 -- target/nmic.sh@53 -- # nvmftestfini 00:16:37.615 15:26:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:37.615 15:26:54 -- nvmf/common.sh@117 -- # sync 00:16:37.615 15:26:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:37.615 15:26:54 -- nvmf/common.sh@120 -- # set +e 00:16:37.615 15:26:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:37.615 15:26:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:37.615 rmmod nvme_tcp 00:16:37.615 rmmod nvme_fabrics 00:16:37.615 rmmod nvme_keyring 00:16:37.615 15:26:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:37.615 15:26:54 -- nvmf/common.sh@124 -- # set -e 00:16:37.615 15:26:54 -- nvmf/common.sh@125 -- # return 0 00:16:37.615 15:26:54 -- nvmf/common.sh@478 -- # '[' -n 1613881 ']' 00:16:37.615 15:26:54 -- nvmf/common.sh@479 -- # killprocess 1613881 00:16:37.615 15:26:54 -- common/autotest_common.sh@936 -- # '[' -z 1613881 ']' 00:16:37.615 15:26:54 -- common/autotest_common.sh@940 -- # kill -0 1613881 00:16:37.615 15:26:54 -- common/autotest_common.sh@941 -- # uname 00:16:37.615 15:26:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:37.615 15:26:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1613881 00:16:37.615 15:26:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:37.615 15:26:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:37.615 15:26:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1613881' 00:16:37.615 killing process with pid 1613881 00:16:37.615 15:26:54 -- 
common/autotest_common.sh@955 -- # kill 1613881 00:16:37.615 15:26:54 -- common/autotest_common.sh@960 -- # wait 1613881 00:16:37.875 15:26:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:37.875 15:26:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:37.875 15:26:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:37.875 15:26:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:37.875 15:26:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:37.875 15:26:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.875 15:26:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.875 15:26:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.791 15:26:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:39.791 00:16:39.791 real 0m17.574s 00:16:39.791 user 0m48.761s 00:16:39.791 sys 0m6.201s 00:16:39.791 15:26:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:39.791 15:26:57 -- common/autotest_common.sh@10 -- # set +x 00:16:39.791 ************************************ 00:16:39.791 END TEST nvmf_nmic 00:16:39.791 ************************************ 00:16:39.791 15:26:57 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:39.791 15:26:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:39.791 15:26:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:39.791 15:26:57 -- common/autotest_common.sh@10 -- # set +x 00:16:40.052 ************************************ 00:16:40.052 START TEST nvmf_fio_target 00:16:40.052 ************************************ 00:16:40.052 15:26:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:40.052 * Looking for test storage... 
00:16:40.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:40.314 15:26:57 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:40.314 15:26:57 -- nvmf/common.sh@7 -- # uname -s 00:16:40.314 15:26:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:40.314 15:26:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:40.314 15:26:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:40.314 15:26:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:40.314 15:26:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:40.314 15:26:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:40.314 15:26:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:40.314 15:26:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:40.314 15:26:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:40.314 15:26:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:40.314 15:26:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:40.314 15:26:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:40.314 15:26:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:40.314 15:26:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:40.314 15:26:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:40.314 15:26:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:40.314 15:26:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:40.314 15:26:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.314 15:26:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.314 15:26:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.314 15:26:57 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.314 15:26:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.314 15:26:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.314 15:26:57 -- paths/export.sh@5 -- # export PATH 00:16:40.314 15:26:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.314 15:26:57 -- nvmf/common.sh@47 -- # : 0 00:16:40.314 15:26:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:40.314 15:26:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:40.314 15:26:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:40.314 15:26:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:40.314 15:26:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:40.314 15:26:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:40.314 15:26:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:40.314 15:26:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:40.314 15:26:57 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:40.314 15:26:57 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:40.314 15:26:57 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:40.314 15:26:57 -- target/fio.sh@16 -- # nvmftestinit 00:16:40.314 15:26:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:40.314 15:26:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.314 15:26:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:40.314 15:26:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:40.314 15:26:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:40.314 15:26:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.314 15:26:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
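Editor's aside: the repeated `paths/export.sh` prepends traced above leave the same toolchain directories (`/opt/go/1.21.1/bin`, `/opt/golangci/1.54.2/bin`, `/opt/protoc/21.7/bin`) in `PATH` many times over. The harness does not deduplicate; purely as an illustrative sketch (the `dedup_path` helper is hypothetical, not part of SPDK), duplicates could be collapsed while preserving first-seen order:

```shell
# Hypothetical helper, not part of the SPDK scripts: collapse duplicate
# PATH entries, keeping the first occurrence of each directory.
dedup_path() {
  # Split on ':', keep a record only the first time it is seen,
  # re-join with ':', then strip the trailing separator.
  printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

dedup_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/usr/bin"
# → /opt/go/1.21.1/bin:/usr/bin
```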
00:16:40.314 15:26:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.314 15:26:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:40.314 15:26:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:40.314 15:26:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:40.314 15:26:57 -- common/autotest_common.sh@10 -- # set +x 00:16:48.539 15:27:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:48.539 15:27:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:48.539 15:27:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:48.539 15:27:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:48.539 15:27:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:48.539 15:27:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:48.539 15:27:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:48.539 15:27:04 -- nvmf/common.sh@295 -- # net_devs=() 00:16:48.539 15:27:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:48.539 15:27:04 -- nvmf/common.sh@296 -- # e810=() 00:16:48.539 15:27:04 -- nvmf/common.sh@296 -- # local -ga e810 00:16:48.539 15:27:04 -- nvmf/common.sh@297 -- # x722=() 00:16:48.539 15:27:04 -- nvmf/common.sh@297 -- # local -ga x722 00:16:48.539 15:27:04 -- nvmf/common.sh@298 -- # mlx=() 00:16:48.539 15:27:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:48.539 15:27:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:48.539 15:27:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:48.539 15:27:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:48.539 15:27:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:48.539 15:27:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:48.539 15:27:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:48.539 15:27:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:48.539 15:27:04 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:48.539 15:27:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:48.539 15:27:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:48.539 15:27:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:48.539 15:27:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:48.539 15:27:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:48.539 15:27:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:48.539 15:27:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:48.539 15:27:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:48.539 15:27:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:48.539 15:27:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.539 15:27:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:48.539 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:48.539 15:27:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.539 15:27:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.539 15:27:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.539 15:27:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.539 15:27:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.539 15:27:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.539 15:27:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:48.539 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:48.539 15:27:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.539 15:27:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.539 15:27:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.539 15:27:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.539 15:27:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.539 15:27:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:48.539 
15:27:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:48.539 15:27:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:48.539 15:27:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.539 15:27:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.539 15:27:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:48.539 15:27:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.539 15:27:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:48.539 Found net devices under 0000:31:00.0: cvl_0_0 00:16:48.539 15:27:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.539 15:27:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.539 15:27:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.539 15:27:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:48.539 15:27:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.539 15:27:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:48.539 Found net devices under 0000:31:00.1: cvl_0_1 00:16:48.539 15:27:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.539 15:27:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:48.539 15:27:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:48.539 15:27:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:48.539 15:27:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:48.539 15:27:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:48.539 15:27:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.539 15:27:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:48.539 15:27:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:48.539 15:27:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:48.539 15:27:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:48.539 15:27:04 -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:48.539 15:27:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:48.539 15:27:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:48.539 15:27:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.539 15:27:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:48.539 15:27:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:48.539 15:27:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:48.539 15:27:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:48.539 15:27:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:48.539 15:27:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:48.539 15:27:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:48.539 15:27:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:48.539 15:27:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:48.539 15:27:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:48.539 15:27:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:48.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:16:48.540 00:16:48.540 --- 10.0.0.2 ping statistics --- 00:16:48.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.540 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:16:48.540 15:27:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:48.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:48.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:16:48.540 00:16:48.540 --- 10.0.0.1 ping statistics --- 00:16:48.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.540 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:16:48.540 15:27:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.540 15:27:04 -- nvmf/common.sh@411 -- # return 0 00:16:48.540 15:27:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:48.540 15:27:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.540 15:27:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:48.540 15:27:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:48.540 15:27:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.540 15:27:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:48.540 15:27:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:48.540 15:27:04 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:48.540 15:27:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:48.540 15:27:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:48.540 15:27:04 -- common/autotest_common.sh@10 -- # set +x 00:16:48.540 15:27:04 -- nvmf/common.sh@470 -- # nvmfpid=1620056 00:16:48.540 15:27:04 -- nvmf/common.sh@471 -- # waitforlisten 1620056 00:16:48.540 15:27:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:48.540 15:27:04 -- common/autotest_common.sh@817 -- # '[' -z 1620056 ']' 00:16:48.540 15:27:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.540 15:27:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:48.540 15:27:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
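Editor's aside: the `nvmf_tcp_init` sequence traced above amounts to moving one NIC port into a private network namespace, addressing both ends, opening TCP port 4420, and verifying reachability with pings in both directions. A condensed sketch of those same commands (interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the 10.0.0.0/24 addresses are taken from this run; the `run`/`DRY_RUN` wrapper is an illustrative addition — with `DRY_RUN` unset on a machine with root and these NICs, the real commands execute):

```shell
# Condensed from the nvmf/common.sh trace above. DRY_RUN=1 (the default
# here) only prints each command; clear it to actually run them (root).
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"             # target port into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator-side address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                          # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator
```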
00:16:48.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.540 15:27:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:48.540 15:27:04 -- common/autotest_common.sh@10 -- # set +x 00:16:48.540 [2024-04-26 15:27:04.995521] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:16:48.540 [2024-04-26 15:27:04.995589] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.540 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.540 [2024-04-26 15:27:05.066322] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:48.540 [2024-04-26 15:27:05.130088] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.540 [2024-04-26 15:27:05.130126] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:48.540 [2024-04-26 15:27:05.130135] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:48.540 [2024-04-26 15:27:05.130143] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:48.540 [2024-04-26 15:27:05.130150] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
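Editor's aside: the `waitforlisten 1620056` call traced above polls until the freshly launched `nvmf_tgt` is alive and listening on its RPC socket; the trace shows its defaults (`rpc_addr=/var/tmp/spdk.sock`, `max_retries=100`). The loop body below is a minimal reimplementation sketch, not the actual `autotest_common.sh` code — the liveness check via `kill -0`, the `-S` socket test, and the 0.1 s sleep are assumptions:

```shell
# Sketch of the waitforlisten pattern: poll until PID $1 is alive and its
# RPC UNIX socket exists, giving up after $3 retries. Loop internals are
# assumed; only the defaults below appear in the trace above.
waitforlisten() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} max=${3:-100} i=0
  while [ $((i++)) -lt "$max" ]; do
    kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
    [ -S "$sock" ] && return 0               # RPC socket is up
    sleep 0.1
  done
  return 1                                   # timed out
}
```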
00:16:48.540 [2024-04-26 15:27:05.130300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.540 [2024-04-26 15:27:05.130417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.540 [2024-04-26 15:27:05.130572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.540 [2024-04-26 15:27:05.130573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:48.540 15:27:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:48.540 15:27:05 -- common/autotest_common.sh@850 -- # return 0 00:16:48.540 15:27:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:48.540 15:27:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:48.540 15:27:05 -- common/autotest_common.sh@10 -- # set +x 00:16:48.540 15:27:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.540 15:27:05 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:48.540 [2024-04-26 15:27:05.936818] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:48.802 15:27:05 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:48.802 15:27:06 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:48.802 15:27:06 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:49.062 15:27:06 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:49.062 15:27:06 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:49.062 15:27:06 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:49.062 15:27:06 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:49.322 15:27:06 -- target/fio.sh@25 -- # 
raid_malloc_bdevs+=Malloc3 00:16:49.322 15:27:06 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:49.583 15:27:06 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:49.583 15:27:07 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:49.583 15:27:07 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:49.843 15:27:07 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:49.843 15:27:07 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:50.104 15:27:07 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:50.104 15:27:07 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:50.104 15:27:07 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:50.366 15:27:07 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:50.366 15:27:07 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:50.628 15:27:07 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:50.628 15:27:07 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:50.628 15:27:08 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:50.889 [2024-04-26 15:27:08.167000] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:50.889 15:27:08 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:51.187 15:27:08 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:51.187 15:27:08 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:53.114 15:27:10 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:53.114 15:27:10 -- common/autotest_common.sh@1184 -- # local i=0 00:16:53.114 15:27:10 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:53.114 15:27:10 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:16:53.114 15:27:10 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:16:53.114 15:27:10 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:55.029 15:27:12 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:55.029 15:27:12 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:55.029 15:27:12 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:55.029 15:27:12 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:16:55.029 15:27:12 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:55.029 15:27:12 -- common/autotest_common.sh@1194 -- # return 0 00:16:55.029 15:27:12 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:55.029 [global] 00:16:55.029 thread=1 00:16:55.029 invalidate=1 00:16:55.029 rw=write 00:16:55.029 time_based=1 00:16:55.029 runtime=1 00:16:55.029 ioengine=libaio 00:16:55.029 direct=1 00:16:55.029 bs=4096 00:16:55.029 
iodepth=1 00:16:55.029 norandommap=0 00:16:55.029 numjobs=1 00:16:55.029 00:16:55.029 verify_dump=1 00:16:55.029 verify_backlog=512 00:16:55.029 verify_state_save=0 00:16:55.029 do_verify=1 00:16:55.029 verify=crc32c-intel 00:16:55.029 [job0] 00:16:55.029 filename=/dev/nvme0n1 00:16:55.029 [job1] 00:16:55.029 filename=/dev/nvme0n2 00:16:55.029 [job2] 00:16:55.029 filename=/dev/nvme0n3 00:16:55.029 [job3] 00:16:55.029 filename=/dev/nvme0n4 00:16:55.029 Could not set queue depth (nvme0n1) 00:16:55.029 Could not set queue depth (nvme0n2) 00:16:55.029 Could not set queue depth (nvme0n3) 00:16:55.029 Could not set queue depth (nvme0n4) 00:16:55.290 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.290 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.290 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.290 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.290 fio-3.35 00:16:55.290 Starting 4 threads 00:16:56.704 00:16:56.704 job0: (groupid=0, jobs=1): err= 0: pid=1622108: Fri Apr 26 15:27:13 2024 00:16:56.704 read: IOPS=177, BW=711KiB/s (728kB/s)(736KiB/1035msec) 00:16:56.704 slat (nsec): min=10222, max=45485, avg=26201.99, stdev=2671.49 00:16:56.704 clat (usec): min=814, max=42065, avg=3718.71, stdev=10113.78 00:16:56.704 lat (usec): min=841, max=42090, avg=3744.91, stdev=10113.54 00:16:56.704 clat percentiles (usec): 00:16:56.704 | 1.00th=[ 865], 5.00th=[ 914], 10.00th=[ 947], 20.00th=[ 1004], 00:16:56.704 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:16:56.704 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1188], 95.00th=[41681], 00:16:56.704 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:56.704 | 99.99th=[42206] 00:16:56.704 write: IOPS=494, BW=1979KiB/s 
(2026kB/s)(2048KiB/1035msec); 0 zone resets 00:16:56.704 slat (nsec): min=8941, max=72104, avg=31038.54, stdev=8063.24 00:16:56.704 clat (usec): min=203, max=1036, avg=633.69, stdev=139.97 00:16:56.704 lat (usec): min=213, max=1069, avg=664.73, stdev=141.92 00:16:56.704 clat percentiles (usec): 00:16:56.705 | 1.00th=[ 289], 5.00th=[ 375], 10.00th=[ 449], 20.00th=[ 537], 00:16:56.705 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 668], 00:16:56.705 | 70.00th=[ 701], 80.00th=[ 742], 90.00th=[ 799], 95.00th=[ 865], 00:16:56.705 | 99.00th=[ 963], 99.50th=[ 1004], 99.90th=[ 1037], 99.95th=[ 1037], 00:16:56.705 | 99.99th=[ 1037] 00:16:56.705 bw ( KiB/s): min= 4096, max= 4096, per=39.89%, avg=4096.00, stdev= 0.00, samples=1 00:16:56.705 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:56.705 lat (usec) : 250=0.29%, 500=10.06%, 750=50.29%, 1000=17.67% 00:16:56.705 lat (msec) : 2=19.97%, 50=1.72% 00:16:56.705 cpu : usr=1.55%, sys=2.42%, ctx=696, majf=0, minf=1 00:16:56.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:56.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.705 issued rwts: total=184,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.705 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:56.705 job1: (groupid=0, jobs=1): err= 0: pid=1622109: Fri Apr 26 15:27:13 2024 00:16:56.705 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:16:56.705 slat (nsec): min=4091, max=24361, avg=9331.09, stdev=878.60 00:16:56.705 clat (usec): min=808, max=1398, avg=1159.93, stdev=79.28 00:16:56.705 lat (usec): min=818, max=1408, avg=1169.26, stdev=79.19 00:16:56.705 clat percentiles (usec): 00:16:56.705 | 1.00th=[ 930], 5.00th=[ 1020], 10.00th=[ 1074], 20.00th=[ 1106], 00:16:56.705 | 30.00th=[ 1139], 40.00th=[ 1156], 50.00th=[ 1172], 60.00th=[ 1188], 00:16:56.705 | 70.00th=[ 
1205], 80.00th=[ 1221], 90.00th=[ 1254], 95.00th=[ 1270], 00:16:56.705 | 99.00th=[ 1319], 99.50th=[ 1336], 99.90th=[ 1401], 99.95th=[ 1401], 00:16:56.705 | 99.99th=[ 1401] 00:16:56.705 write: IOPS=608, BW=2434KiB/s (2492kB/s)(2436KiB/1001msec); 0 zone resets 00:16:56.705 slat (usec): min=3, max=1757, avg=25.03, stdev=71.49 00:16:56.705 clat (usec): min=259, max=963, avg=625.21, stdev=118.19 00:16:56.705 lat (usec): min=288, max=2563, avg=650.24, stdev=144.35 00:16:56.705 clat percentiles (usec): 00:16:56.705 | 1.00th=[ 318], 5.00th=[ 412], 10.00th=[ 474], 20.00th=[ 529], 00:16:56.705 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 660], 00:16:56.705 | 70.00th=[ 701], 80.00th=[ 725], 90.00th=[ 775], 95.00th=[ 807], 00:16:56.705 | 99.00th=[ 865], 99.50th=[ 881], 99.90th=[ 963], 99.95th=[ 963], 00:16:56.705 | 99.99th=[ 963] 00:16:56.705 bw ( KiB/s): min= 4096, max= 4096, per=39.89%, avg=4096.00, stdev= 0.00, samples=1 00:16:56.705 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:56.705 lat (usec) : 500=7.67%, 750=38.45%, 1000=9.99% 00:16:56.705 lat (msec) : 2=43.89% 00:16:56.705 cpu : usr=1.00%, sys=1.50%, ctx=1125, majf=0, minf=1 00:16:56.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:56.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.705 issued rwts: total=512,609,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.705 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:56.705 job2: (groupid=0, jobs=1): err= 0: pid=1622110: Fri Apr 26 15:27:13 2024 00:16:56.705 read: IOPS=19, BW=79.1KiB/s (80.9kB/s)(80.0KiB/1012msec) 00:16:56.705 slat (nsec): min=10213, max=26844, avg=25691.90, stdev=3647.23 00:16:56.705 clat (usec): min=40839, max=41023, avg=40957.11, stdev=45.19 00:16:56.705 lat (usec): min=40850, max=41050, avg=40982.81, stdev=47.48 00:16:56.705 clat percentiles 
(usec): 00:16:56.705 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:16:56.705 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:56.705 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:56.705 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:56.705 | 99.99th=[41157] 00:16:56.705 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:16:56.705 slat (nsec): min=9686, max=56532, avg=27052.01, stdev=11612.20 00:16:56.705 clat (usec): min=131, max=2826, avg=339.72, stdev=129.78 00:16:56.705 lat (usec): min=159, max=2860, avg=366.78, stdev=130.09 00:16:56.705 clat percentiles (usec): 00:16:56.705 | 1.00th=[ 212], 5.00th=[ 231], 10.00th=[ 249], 20.00th=[ 285], 00:16:56.705 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 338], 60.00th=[ 351], 00:16:56.705 | 70.00th=[ 363], 80.00th=[ 375], 90.00th=[ 404], 95.00th=[ 429], 00:16:56.705 | 99.00th=[ 482], 99.50th=[ 529], 99.90th=[ 2835], 99.95th=[ 2835], 00:16:56.705 | 99.99th=[ 2835] 00:16:56.705 bw ( KiB/s): min= 4096, max= 4096, per=39.89%, avg=4096.00, stdev= 0.00, samples=1 00:16:56.705 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:56.705 lat (usec) : 250=9.77%, 500=85.53%, 750=0.56% 00:16:56.705 lat (msec) : 2=0.19%, 4=0.19%, 50=3.76% 00:16:56.705 cpu : usr=0.69%, sys=1.38%, ctx=533, majf=0, minf=1 00:16:56.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:56.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.705 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.705 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:56.705 job3: (groupid=0, jobs=1): err= 0: pid=1622111: Fri Apr 26 15:27:13 2024 00:16:56.705 read: IOPS=537, BW=2150KiB/s (2201kB/s)(2152KiB/1001msec) 00:16:56.705 slat (nsec): 
min=7179, max=61530, avg=25595.38, stdev=4694.82 00:16:56.705 clat (usec): min=379, max=1042, avg=771.94, stdev=146.03 00:16:56.705 lat (usec): min=405, max=1068, avg=797.54, stdev=145.98 00:16:56.705 clat percentiles (usec): 00:16:56.705 | 1.00th=[ 506], 5.00th=[ 537], 10.00th=[ 553], 20.00th=[ 611], 00:16:56.705 | 30.00th=[ 685], 40.00th=[ 734], 50.00th=[ 799], 60.00th=[ 848], 00:16:56.705 | 70.00th=[ 889], 80.00th=[ 914], 90.00th=[ 955], 95.00th=[ 971], 00:16:56.705 | 99.00th=[ 1012], 99.50th=[ 1020], 99.90th=[ 1045], 99.95th=[ 1045], 00:16:56.705 | 99.99th=[ 1045] 00:16:56.705 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:16:56.705 slat (nsec): min=9649, max=69349, avg=31152.59, stdev=8516.71 00:16:56.705 clat (usec): min=134, max=875, avg=514.04, stdev=117.96 00:16:56.705 lat (usec): min=145, max=908, avg=545.19, stdev=120.63 00:16:56.705 clat percentiles (usec): 00:16:56.705 | 1.00th=[ 229], 5.00th=[ 306], 10.00th=[ 371], 20.00th=[ 408], 00:16:56.705 | 30.00th=[ 461], 40.00th=[ 482], 50.00th=[ 506], 60.00th=[ 545], 00:16:56.705 | 70.00th=[ 586], 80.00th=[ 619], 90.00th=[ 668], 95.00th=[ 701], 00:16:56.705 | 99.00th=[ 783], 99.50th=[ 807], 99.90th=[ 848], 99.95th=[ 873], 00:16:56.705 | 99.99th=[ 873] 00:16:56.705 bw ( KiB/s): min= 4096, max= 4096, per=39.89%, avg=4096.00, stdev= 0.00, samples=1 00:16:56.705 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:56.705 lat (usec) : 250=0.96%, 500=30.47%, 750=47.18%, 1000=20.87% 00:16:56.705 lat (msec) : 2=0.51% 00:16:56.705 cpu : usr=2.20%, sys=4.80%, ctx=1563, majf=0, minf=1 00:16:56.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:56.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.705 issued rwts: total=538,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.705 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:16:56.705 00:16:56.705 Run status group 0 (all jobs): 00:16:56.705 READ: bw=4846KiB/s (4963kB/s), 79.1KiB/s-2150KiB/s (80.9kB/s-2201kB/s), io=5016KiB (5136kB), run=1001-1035msec 00:16:56.705 WRITE: bw=10.0MiB/s (10.5MB/s), 1979KiB/s-4092KiB/s (2026kB/s-4190kB/s), io=10.4MiB (10.9MB), run=1001-1035msec 00:16:56.705 00:16:56.705 Disk stats (read/write): 00:16:56.705 nvme0n1: ios=233/512, merge=0/0, ticks=702/245, in_queue=947, util=86.05% 00:16:56.705 nvme0n2: ios=425/512, merge=0/0, ticks=684/313, in_queue=997, util=98.34% 00:16:56.705 nvme0n3: ios=35/512, merge=0/0, ticks=1454/163, in_queue=1617, util=100.00% 00:16:56.705 nvme0n4: ios=569/647, merge=0/0, ticks=805/309, in_queue=1114, util=100.00% 00:16:56.705 15:27:13 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:56.705 [global] 00:16:56.705 thread=1 00:16:56.705 invalidate=1 00:16:56.705 rw=randwrite 00:16:56.705 time_based=1 00:16:56.705 runtime=1 00:16:56.705 ioengine=libaio 00:16:56.705 direct=1 00:16:56.705 bs=4096 00:16:56.705 iodepth=1 00:16:56.705 norandommap=0 00:16:56.705 numjobs=1 00:16:56.705 00:16:56.705 verify_dump=1 00:16:56.705 verify_backlog=512 00:16:56.705 verify_state_save=0 00:16:56.705 do_verify=1 00:16:56.705 verify=crc32c-intel 00:16:56.705 [job0] 00:16:56.705 filename=/dev/nvme0n1 00:16:56.705 [job1] 00:16:56.705 filename=/dev/nvme0n2 00:16:56.705 [job2] 00:16:56.705 filename=/dev/nvme0n3 00:16:56.705 [job3] 00:16:56.705 filename=/dev/nvme0n4 00:16:56.705 Could not set queue depth (nvme0n1) 00:16:56.705 Could not set queue depth (nvme0n2) 00:16:56.705 Could not set queue depth (nvme0n3) 00:16:56.705 Could not set queue depth (nvme0n4) 00:16:56.974 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:56.974 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:16:56.974 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:56.974 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:56.974 fio-3.35 00:16:56.974 Starting 4 threads 00:16:58.374 00:16:58.374 job0: (groupid=0, jobs=1): err= 0: pid=1622638: Fri Apr 26 15:27:15 2024 00:16:58.374 read: IOPS=59, BW=240KiB/s (246kB/s)(240KiB/1001msec) 00:16:58.374 slat (nsec): min=7629, max=61447, avg=25561.20, stdev=7372.35 00:16:58.374 clat (usec): min=697, max=41995, avg=12422.17, stdev=17857.23 00:16:58.374 lat (usec): min=724, max=42021, avg=12447.73, stdev=17856.53 00:16:58.374 clat percentiles (usec): 00:16:58.374 | 1.00th=[ 701], 5.00th=[ 914], 10.00th=[ 996], 20.00th=[ 1090], 00:16:58.374 | 30.00th=[ 1139], 40.00th=[ 1172], 50.00th=[ 1221], 60.00th=[ 1270], 00:16:58.374 | 70.00th=[ 1319], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:58.374 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:58.374 | 99.99th=[42206] 00:16:58.374 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:16:58.374 slat (nsec): min=9507, max=56477, avg=30272.62, stdev=8153.88 00:16:58.374 clat (usec): min=208, max=674, avg=457.32, stdev=94.14 00:16:58.374 lat (usec): min=242, max=706, avg=487.60, stdev=95.43 00:16:58.374 clat percentiles (usec): 00:16:58.374 | 1.00th=[ 235], 5.00th=[ 302], 10.00th=[ 338], 20.00th=[ 375], 00:16:58.374 | 30.00th=[ 400], 40.00th=[ 437], 50.00th=[ 465], 60.00th=[ 490], 00:16:58.374 | 70.00th=[ 515], 80.00th=[ 537], 90.00th=[ 578], 95.00th=[ 611], 00:16:58.374 | 99.00th=[ 652], 99.50th=[ 668], 99.90th=[ 676], 99.95th=[ 676], 00:16:58.374 | 99.99th=[ 676] 00:16:58.374 bw ( KiB/s): min= 4096, max= 4096, per=39.12%, avg=4096.00, stdev= 0.00, samples=1 00:16:58.374 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:58.374 lat (usec) : 250=1.05%, 500=56.12%, 
750=32.52%, 1000=0.87% 00:16:58.374 lat (msec) : 2=6.29%, 20=0.17%, 50=2.97% 00:16:58.374 cpu : usr=0.70%, sys=1.90%, ctx=574, majf=0, minf=1 00:16:58.374 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:58.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.374 issued rwts: total=60,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.374 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:58.374 job1: (groupid=0, jobs=1): err= 0: pid=1622639: Fri Apr 26 15:27:15 2024 00:16:58.374 read: IOPS=885, BW=3540KiB/s (3625kB/s)(3544KiB/1001msec) 00:16:58.374 slat (nsec): min=6492, max=44740, avg=24334.46, stdev=5617.51 00:16:58.374 clat (usec): min=233, max=1163, avg=605.00, stdev=160.48 00:16:58.374 lat (usec): min=258, max=1191, avg=629.34, stdev=160.45 00:16:58.374 clat percentiles (usec): 00:16:58.374 | 1.00th=[ 322], 5.00th=[ 367], 10.00th=[ 416], 20.00th=[ 445], 00:16:58.374 | 30.00th=[ 490], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 635], 00:16:58.374 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 865], 95.00th=[ 906], 00:16:58.374 | 99.00th=[ 971], 99.50th=[ 1004], 99.90th=[ 1172], 99.95th=[ 1172], 00:16:58.374 | 99.99th=[ 1172] 00:16:58.374 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:16:58.374 slat (nsec): min=9026, max=49897, avg=25621.46, stdev=9410.23 00:16:58.374 clat (usec): min=120, max=645, avg=392.36, stdev=91.93 00:16:58.374 lat (usec): min=133, max=675, avg=417.98, stdev=93.40 00:16:58.374 clat percentiles (usec): 00:16:58.374 | 1.00th=[ 217], 5.00th=[ 239], 10.00th=[ 273], 20.00th=[ 318], 00:16:58.374 | 30.00th=[ 334], 40.00th=[ 355], 50.00th=[ 379], 60.00th=[ 416], 00:16:58.374 | 70.00th=[ 461], 80.00th=[ 486], 90.00th=[ 515], 95.00th=[ 529], 00:16:58.374 | 99.00th=[ 570], 99.50th=[ 578], 99.90th=[ 619], 99.95th=[ 644], 00:16:58.374 | 99.99th=[ 644] 00:16:58.374 bw ( 
KiB/s): min= 4096, max= 4096, per=39.12%, avg=4096.00, stdev= 0.00, samples=1 00:16:58.374 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:58.374 lat (usec) : 250=3.61%, 500=56.44%, 750=32.46%, 1000=7.23% 00:16:58.374 lat (msec) : 2=0.26% 00:16:58.374 cpu : usr=2.60%, sys=5.10%, ctx=1910, majf=0, minf=1 00:16:58.374 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:58.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.374 issued rwts: total=886,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.374 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:58.374 job2: (groupid=0, jobs=1): err= 0: pid=1622640: Fri Apr 26 15:27:15 2024 00:16:58.374 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:16:58.374 slat (nsec): min=25209, max=58432, avg=26389.14, stdev=3652.54 00:16:58.374 clat (usec): min=809, max=2341, avg=1174.67, stdev=108.15 00:16:58.374 lat (usec): min=835, max=2366, avg=1201.06, stdev=107.95 00:16:58.374 clat percentiles (usec): 00:16:58.374 | 1.00th=[ 857], 5.00th=[ 979], 10.00th=[ 1045], 20.00th=[ 1123], 00:16:58.375 | 30.00th=[ 1156], 40.00th=[ 1172], 50.00th=[ 1188], 60.00th=[ 1205], 00:16:58.375 | 70.00th=[ 1221], 80.00th=[ 1254], 90.00th=[ 1270], 95.00th=[ 1287], 00:16:58.375 | 99.00th=[ 1336], 99.50th=[ 1369], 99.90th=[ 2343], 99.95th=[ 2343], 00:16:58.375 | 99.99th=[ 2343] 00:16:58.375 write: IOPS=571, BW=2286KiB/s (2341kB/s)(2288KiB/1001msec); 0 zone resets 00:16:58.375 slat (nsec): min=9863, max=94366, avg=30237.63, stdev=8702.59 00:16:58.375 clat (usec): min=267, max=973, avg=627.72, stdev=132.10 00:16:58.375 lat (usec): min=301, max=1006, avg=657.95, stdev=134.51 00:16:58.375 clat percentiles (usec): 00:16:58.375 | 1.00th=[ 306], 5.00th=[ 388], 10.00th=[ 437], 20.00th=[ 523], 00:16:58.375 | 30.00th=[ 562], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 668], 
00:16:58.375 | 70.00th=[ 709], 80.00th=[ 742], 90.00th=[ 791], 95.00th=[ 840], 00:16:58.375 | 99.00th=[ 889], 99.50th=[ 922], 99.90th=[ 971], 99.95th=[ 971], 00:16:58.375 | 99.99th=[ 971] 00:16:58.375 bw ( KiB/s): min= 4096, max= 4096, per=39.12%, avg=4096.00, stdev= 0.00, samples=1 00:16:58.375 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:58.375 lat (usec) : 500=8.58%, 750=34.96%, 1000=12.36% 00:16:58.375 lat (msec) : 2=44.00%, 4=0.09% 00:16:58.375 cpu : usr=2.00%, sys=2.80%, ctx=1086, majf=0, minf=1 00:16:58.375 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:58.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.375 issued rwts: total=512,572,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.375 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:58.375 job3: (groupid=0, jobs=1): err= 0: pid=1622641: Fri Apr 26 15:27:15 2024 00:16:58.375 read: IOPS=47, BW=192KiB/s (196kB/s)(192KiB/1001msec) 00:16:58.375 slat (nsec): min=8203, max=43599, avg=25445.08, stdev=3659.11 00:16:58.375 clat (usec): min=896, max=42045, avg=13562.71, stdev=18737.87 00:16:58.375 lat (usec): min=922, max=42070, avg=13588.15, stdev=18737.92 00:16:58.375 clat percentiles (usec): 00:16:58.375 | 1.00th=[ 898], 5.00th=[ 1020], 10.00th=[ 1057], 20.00th=[ 1139], 00:16:58.375 | 30.00th=[ 1172], 40.00th=[ 1188], 50.00th=[ 1205], 60.00th=[ 1237], 00:16:58.375 | 70.00th=[25560], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:16:58.375 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:58.375 | 99.99th=[42206] 00:16:58.375 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:16:58.375 slat (nsec): min=9954, max=61901, avg=29456.10, stdev=8115.40 00:16:58.375 clat (usec): min=264, max=969, avg=642.94, stdev=131.12 00:16:58.375 lat (usec): min=296, max=1001, avg=672.39, 
stdev=134.10 00:16:58.375 clat percentiles (usec): 00:16:58.375 | 1.00th=[ 314], 5.00th=[ 404], 10.00th=[ 453], 20.00th=[ 537], 00:16:58.375 | 30.00th=[ 586], 40.00th=[ 627], 50.00th=[ 660], 60.00th=[ 693], 00:16:58.375 | 70.00th=[ 717], 80.00th=[ 750], 90.00th=[ 799], 95.00th=[ 840], 00:16:58.375 | 99.00th=[ 906], 99.50th=[ 938], 99.90th=[ 971], 99.95th=[ 971], 00:16:58.375 | 99.99th=[ 971] 00:16:58.375 bw ( KiB/s): min= 4096, max= 4096, per=39.12%, avg=4096.00, stdev= 0.00, samples=1 00:16:58.375 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:58.375 lat (usec) : 500=13.39%, 750=60.36%, 1000=18.04% 00:16:58.375 lat (msec) : 2=5.54%, 50=2.68% 00:16:58.375 cpu : usr=0.50%, sys=1.90%, ctx=561, majf=0, minf=1 00:16:58.375 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:58.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.375 issued rwts: total=48,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.375 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:58.375 00:16:58.375 Run status group 0 (all jobs): 00:16:58.375 READ: bw=6018KiB/s (6162kB/s), 192KiB/s-3540KiB/s (196kB/s-3625kB/s), io=6024KiB (6169kB), run=1001-1001msec 00:16:58.375 WRITE: bw=10.2MiB/s (10.7MB/s), 2046KiB/s-4092KiB/s (2095kB/s-4190kB/s), io=10.2MiB (10.7MB), run=1001-1001msec 00:16:58.375 00:16:58.375 Disk stats (read/write): 00:16:58.375 nvme0n1: ios=52/512, merge=0/0, ticks=1619/220, in_queue=1839, util=99.70% 00:16:58.375 nvme0n2: ios=678/1024, merge=0/0, ticks=447/405, in_queue=852, util=89.71% 00:16:58.375 nvme0n3: ios=465/512, merge=0/0, ticks=616/293, in_queue=909, util=100.00% 00:16:58.375 nvme0n4: ios=52/512, merge=0/0, ticks=1124/318, in_queue=1442, util=100.00% 00:16:58.375 15:27:15 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write 
-r 1 -v 00:16:58.375 [global] 00:16:58.375 thread=1 00:16:58.375 invalidate=1 00:16:58.375 rw=write 00:16:58.375 time_based=1 00:16:58.375 runtime=1 00:16:58.375 ioengine=libaio 00:16:58.375 direct=1 00:16:58.375 bs=4096 00:16:58.375 iodepth=128 00:16:58.375 norandommap=0 00:16:58.375 numjobs=1 00:16:58.375 00:16:58.375 verify_dump=1 00:16:58.375 verify_backlog=512 00:16:58.375 verify_state_save=0 00:16:58.375 do_verify=1 00:16:58.375 verify=crc32c-intel 00:16:58.375 [job0] 00:16:58.375 filename=/dev/nvme0n1 00:16:58.375 [job1] 00:16:58.375 filename=/dev/nvme0n2 00:16:58.375 [job2] 00:16:58.375 filename=/dev/nvme0n3 00:16:58.375 [job3] 00:16:58.375 filename=/dev/nvme0n4 00:16:58.375 Could not set queue depth (nvme0n1) 00:16:58.375 Could not set queue depth (nvme0n2) 00:16:58.375 Could not set queue depth (nvme0n3) 00:16:58.375 Could not set queue depth (nvme0n4) 00:16:58.635 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:58.635 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:58.635 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:58.635 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:58.635 fio-3.35 00:16:58.635 Starting 4 threads 00:17:00.047 00:17:00.047 job0: (groupid=0, jobs=1): err= 0: pid=1623158: Fri Apr 26 15:27:17 2024 00:17:00.047 read: IOPS=5470, BW=21.4MiB/s (22.4MB/s)(21.5MiB/1005msec) 00:17:00.047 slat (nsec): min=897, max=10992k, avg=103269.17, stdev=744195.03 00:17:00.047 clat (usec): min=1746, max=22905, avg=12293.10, stdev=3033.61 00:17:00.047 lat (usec): min=3709, max=22938, avg=12396.37, stdev=3078.19 00:17:00.047 clat percentiles (usec): 00:17:00.047 | 1.00th=[ 4752], 5.00th=[ 8291], 10.00th=[10028], 20.00th=[10683], 00:17:00.047 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11600], 
60.00th=[11863], 00:17:00.047 | 70.00th=[12125], 80.00th=[13698], 90.00th=[17433], 95.00th=[19006], 00:17:00.047 | 99.00th=[21365], 99.50th=[21627], 99.90th=[22152], 99.95th=[22152], 00:17:00.047 | 99.99th=[22938] 00:17:00.047 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:17:00.047 slat (nsec): min=1577, max=8870.2k, avg=72655.18, stdev=286381.14 00:17:00.047 clat (usec): min=1108, max=22154, avg=10625.14, stdev=2340.05 00:17:00.047 lat (usec): min=1118, max=22156, avg=10697.79, stdev=2356.59 00:17:00.047 clat percentiles (usec): 00:17:00.047 | 1.00th=[ 3130], 5.00th=[ 5211], 10.00th=[ 7046], 20.00th=[ 9634], 00:17:00.047 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11469], 60.00th=[11731], 00:17:00.048 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12256], 95.00th=[12387], 00:17:00.048 | 99.00th=[14484], 99.50th=[15533], 99.90th=[21890], 99.95th=[22152], 00:17:00.048 | 99.99th=[22152] 00:17:00.048 bw ( KiB/s): min=21904, max=23152, per=22.53%, avg=22528.00, stdev=882.47, samples=2 00:17:00.048 iops : min= 5476, max= 5788, avg=5632.00, stdev=220.62, samples=2 00:17:00.048 lat (msec) : 2=0.03%, 4=1.37%, 10=14.93%, 20=82.22%, 50=1.46% 00:17:00.048 cpu : usr=2.69%, sys=5.78%, ctx=733, majf=0, minf=1 00:17:00.048 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:00.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:00.048 issued rwts: total=5498,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:00.048 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:00.048 job1: (groupid=0, jobs=1): err= 0: pid=1623159: Fri Apr 26 15:27:17 2024 00:17:00.048 read: IOPS=6583, BW=25.7MiB/s (27.0MB/s)(26.0MiB/1011msec) 00:17:00.048 slat (nsec): min=890, max=10496k, avg=71128.87, stdev=399569.68 00:17:00.048 clat (usec): min=4156, max=22898, avg=9296.40, stdev=1967.51 00:17:00.048 lat (usec): min=4160, 
max=22900, avg=9367.53, stdev=1992.88 00:17:00.048 clat percentiles (usec): 00:17:00.048 | 1.00th=[ 6587], 5.00th=[ 7504], 10.00th=[ 7898], 20.00th=[ 8225], 00:17:00.048 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9241], 00:17:00.048 | 70.00th=[ 9503], 80.00th=[ 9634], 90.00th=[10028], 95.00th=[11731], 00:17:00.048 | 99.00th=[20317], 99.50th=[20317], 99.90th=[22414], 99.95th=[22414], 00:17:00.048 | 99.99th=[22938] 00:17:00.048 write: IOPS=6890, BW=26.9MiB/s (28.2MB/s)(27.2MiB/1011msec); 0 zone resets 00:17:00.048 slat (nsec): min=1564, max=13943k, avg=71743.80, stdev=410264.93 00:17:00.048 clat (usec): min=2694, max=24303, avg=9453.38, stdev=2642.32 00:17:00.048 lat (usec): min=2702, max=24306, avg=9525.12, stdev=2673.25 00:17:00.048 clat percentiles (usec): 00:17:00.048 | 1.00th=[ 4752], 5.00th=[ 6980], 10.00th=[ 7701], 20.00th=[ 8094], 00:17:00.048 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9372], 00:17:00.048 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[14353], 00:17:00.048 | 99.00th=[22414], 99.50th=[22676], 99.90th=[23462], 99.95th=[24249], 00:17:00.048 | 99.99th=[24249] 00:17:00.048 bw ( KiB/s): min=26424, max=28288, per=27.36%, avg=27356.00, stdev=1318.05, samples=2 00:17:00.048 iops : min= 6606, max= 7072, avg=6839.00, stdev=329.51, samples=2 00:17:00.048 lat (msec) : 4=0.11%, 10=89.30%, 20=8.90%, 50=1.69% 00:17:00.048 cpu : usr=3.76%, sys=4.75%, ctx=683, majf=0, minf=1 00:17:00.048 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:17:00.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:00.048 issued rwts: total=6656,6966,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:00.048 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:00.048 job2: (groupid=0, jobs=1): err= 0: pid=1623161: Fri Apr 26 15:27:17 2024 00:17:00.048 read: IOPS=6077, BW=23.7MiB/s 
(24.9MB/s)(24.0MiB/1011msec) 00:17:00.048 slat (nsec): min=926, max=13478k, avg=82911.53, stdev=610447.61 00:17:00.048 clat (usec): min=2059, max=26137, avg=10727.85, stdev=3273.56 00:17:00.048 lat (usec): min=2066, max=26162, avg=10810.76, stdev=3311.45 00:17:00.048 clat percentiles (usec): 00:17:00.048 | 1.00th=[ 4359], 5.00th=[ 7308], 10.00th=[ 7767], 20.00th=[ 8225], 00:17:00.048 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[10421], 00:17:00.048 | 70.00th=[11731], 80.00th=[13435], 90.00th=[15270], 95.00th=[17171], 00:17:00.048 | 99.00th=[22414], 99.50th=[22938], 99.90th=[22938], 99.95th=[22938], 00:17:00.048 | 99.99th=[26084] 00:17:00.048 write: IOPS=6454, BW=25.2MiB/s (26.4MB/s)(25.5MiB/1011msec); 0 zone resets 00:17:00.048 slat (nsec): min=1608, max=7573.1k, avg=69984.80, stdev=419220.55 00:17:00.048 clat (usec): min=1014, max=28787, avg=9555.71, stdev=4159.26 00:17:00.048 lat (usec): min=1210, max=28796, avg=9625.69, stdev=4180.87 00:17:00.048 clat percentiles (usec): 00:17:00.048 | 1.00th=[ 3392], 5.00th=[ 4948], 10.00th=[ 5473], 20.00th=[ 6521], 00:17:00.048 | 30.00th=[ 7701], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9110], 00:17:00.048 | 70.00th=[ 9503], 80.00th=[11863], 90.00th=[14484], 95.00th=[17433], 00:17:00.048 | 99.00th=[25560], 99.50th=[25822], 99.90th=[27919], 99.95th=[28705], 00:17:00.048 | 99.99th=[28705] 00:17:00.048 bw ( KiB/s): min=22536, max=28656, per=25.60%, avg=25596.00, stdev=4327.49, samples=2 00:17:00.048 iops : min= 5634, max= 7164, avg=6399.00, stdev=1081.87, samples=2 00:17:00.048 lat (msec) : 2=0.08%, 4=1.03%, 10=64.51%, 20=31.40%, 50=2.98% 00:17:00.048 cpu : usr=4.26%, sys=6.34%, ctx=601, majf=0, minf=1 00:17:00.048 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:00.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:00.048 issued rwts: total=6144,6526,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:17:00.048 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:00.048 job3: (groupid=0, jobs=1): err= 0: pid=1623162: Fri Apr 26 15:27:17 2024 00:17:00.048 read: IOPS=5999, BW=23.4MiB/s (24.6MB/s)(23.5MiB/1003msec) 00:17:00.048 slat (nsec): min=893, max=9668.0k, avg=82639.42, stdev=432055.94 00:17:00.048 clat (usec): min=709, max=31395, avg=10706.36, stdev=2692.44 00:17:00.048 lat (usec): min=2381, max=31397, avg=10789.00, stdev=2688.96 00:17:00.048 clat percentiles (usec): 00:17:00.048 | 1.00th=[ 5145], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[ 9372], 00:17:00.048 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:17:00.048 | 70.00th=[10421], 80.00th=[10945], 90.00th=[14484], 95.00th=[16909], 00:17:00.048 | 99.00th=[20841], 99.50th=[22152], 99.90th=[24773], 99.95th=[31327], 00:17:00.048 | 99.99th=[31327] 00:17:00.048 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:17:00.048 slat (nsec): min=1543, max=5660.8k, avg=79077.80, stdev=393605.33 00:17:00.048 clat (usec): min=5601, max=27475, avg=10176.11, stdev=2022.81 00:17:00.048 lat (usec): min=5613, max=27477, avg=10255.19, stdev=2021.44 00:17:00.048 clat percentiles (usec): 00:17:00.048 | 1.00th=[ 7242], 5.00th=[ 7767], 10.00th=[ 8094], 20.00th=[ 9110], 00:17:00.048 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:17:00.048 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11600], 95.00th=[14353], 00:17:00.048 | 99.00th=[19268], 99.50th=[21627], 99.90th=[27395], 99.95th=[27395], 00:17:00.048 | 99.99th=[27395] 00:17:00.048 bw ( KiB/s): min=24544, max=24608, per=24.58%, avg=24576.00, stdev=45.25, samples=2 00:17:00.048 iops : min= 6136, max= 6152, avg=6144.00, stdev=11.31, samples=2 00:17:00.048 lat (usec) : 750=0.01% 00:17:00.048 lat (msec) : 4=0.26%, 10=44.02%, 20=54.63%, 50=1.09% 00:17:00.048 cpu : usr=2.30%, sys=3.29%, ctx=733, majf=0, minf=1 00:17:00.048 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.1%, 32=0.3%, >=64=99.5% 00:17:00.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:00.048 issued rwts: total=6017,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:00.048 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:00.048 00:17:00.048 Run status group 0 (all jobs): 00:17:00.048 READ: bw=93.9MiB/s (98.5MB/s), 21.4MiB/s-25.7MiB/s (22.4MB/s-27.0MB/s), io=95.0MiB (99.6MB), run=1003-1011msec 00:17:00.048 WRITE: bw=97.6MiB/s (102MB/s), 21.9MiB/s-26.9MiB/s (23.0MB/s-28.2MB/s), io=98.7MiB (103MB), run=1003-1011msec 00:17:00.048 00:17:00.048 Disk stats (read/write): 00:17:00.048 nvme0n1: ios=4146/4559, merge=0/0, ticks=48753/47419, in_queue=96172, util=84.17% 00:17:00.048 nvme0n2: ios=5171/5413, merge=0/0, ticks=15708/15070, in_queue=30778, util=99.79% 00:17:00.048 nvme0n3: ios=5120/5143, merge=0/0, ticks=46376/41857, in_queue=88233, util=86.78% 00:17:00.048 nvme0n4: ios=4608/4925, merge=0/0, ticks=16966/14265, in_queue=31231, util=88.94% 00:17:00.048 15:27:17 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:00.048 [global] 00:17:00.048 thread=1 00:17:00.048 invalidate=1 00:17:00.048 rw=randwrite 00:17:00.048 time_based=1 00:17:00.048 runtime=1 00:17:00.048 ioengine=libaio 00:17:00.048 direct=1 00:17:00.048 bs=4096 00:17:00.048 iodepth=128 00:17:00.048 norandommap=0 00:17:00.048 numjobs=1 00:17:00.048 00:17:00.048 verify_dump=1 00:17:00.048 verify_backlog=512 00:17:00.048 verify_state_save=0 00:17:00.048 do_verify=1 00:17:00.048 verify=crc32c-intel 00:17:00.048 [job0] 00:17:00.048 filename=/dev/nvme0n1 00:17:00.048 [job1] 00:17:00.048 filename=/dev/nvme0n2 00:17:00.048 [job2] 00:17:00.048 filename=/dev/nvme0n3 00:17:00.048 [job3] 00:17:00.048 filename=/dev/nvme0n4 00:17:00.048 Could not set queue depth (nvme0n1) 00:17:00.048 Could not set 
queue depth (nvme0n2) 00:17:00.048 Could not set queue depth (nvme0n3) 00:17:00.048 Could not set queue depth (nvme0n4) 00:17:00.314 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:00.314 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:00.314 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:00.314 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:00.314 fio-3.35 00:17:00.314 Starting 4 threads 00:17:01.722 00:17:01.722 job0: (groupid=0, jobs=1): err= 0: pid=1623686: Fri Apr 26 15:27:18 2024 00:17:01.722 read: IOPS=4450, BW=17.4MiB/s (18.2MB/s)(17.5MiB/1004msec) 00:17:01.722 slat (nsec): min=851, max=9505.7k, avg=97501.19, stdev=551989.15 00:17:01.722 clat (usec): min=755, max=42673, avg=12560.62, stdev=6219.13 00:17:01.722 lat (usec): min=763, max=42683, avg=12658.12, stdev=6251.63 00:17:01.722 clat percentiles (usec): 00:17:01.722 | 1.00th=[ 1483], 5.00th=[ 4752], 10.00th=[ 6456], 20.00th=[ 7635], 00:17:01.722 | 30.00th=[ 8291], 40.00th=[ 9241], 50.00th=[10814], 60.00th=[13435], 00:17:01.722 | 70.00th=[15270], 80.00th=[18482], 90.00th=[21890], 95.00th=[23987], 00:17:01.722 | 99.00th=[29754], 99.50th=[33817], 99.90th=[42730], 99.95th=[42730], 00:17:01.722 | 99.99th=[42730] 00:17:01.722 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:17:01.722 slat (nsec): min=1533, max=10689k, avg=116814.40, stdev=636405.35 00:17:01.722 clat (usec): min=3325, max=74709, avg=15324.03, stdev=11812.65 00:17:01.722 lat (usec): min=3334, max=74719, avg=15440.85, stdev=11881.87 00:17:01.722 clat percentiles (usec): 00:17:01.722 | 1.00th=[ 4555], 5.00th=[ 5932], 10.00th=[ 6915], 20.00th=[ 7373], 00:17:01.722 | 30.00th=[ 8225], 40.00th=[10683], 50.00th=[12780], 60.00th=[14222], 00:17:01.722 | 
70.00th=[16057], 80.00th=[17957], 90.00th=[27395], 95.00th=[38011], 00:17:01.722 | 99.00th=[68682], 99.50th=[69731], 99.90th=[74974], 99.95th=[74974], 00:17:01.722 | 99.99th=[74974] 00:17:01.722 bw ( KiB/s): min=17392, max=19472, per=22.79%, avg=18432.00, stdev=1470.78, samples=2 00:17:01.722 iops : min= 4348, max= 4868, avg=4608.00, stdev=367.70, samples=2 00:17:01.722 lat (usec) : 1000=0.08% 00:17:01.722 lat (msec) : 2=0.91%, 4=1.13%, 10=38.68%, 20=45.17%, 50=12.36% 00:17:01.722 lat (msec) : 100=1.65% 00:17:01.722 cpu : usr=2.09%, sys=4.09%, ctx=524, majf=0, minf=1 00:17:01.722 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:01.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:01.722 issued rwts: total=4468,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:01.722 job1: (groupid=0, jobs=1): err= 0: pid=1623687: Fri Apr 26 15:27:18 2024 00:17:01.722 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:17:01.722 slat (nsec): min=921, max=19456k, avg=165362.34, stdev=1069685.90 00:17:01.722 clat (usec): min=8859, max=67814, avg=20783.20, stdev=9341.66 00:17:01.722 lat (usec): min=8862, max=67841, avg=20948.56, stdev=9439.09 00:17:01.722 clat percentiles (usec): 00:17:01.722 | 1.00th=[10814], 5.00th=[12125], 10.00th=[12387], 20.00th=[13698], 00:17:01.722 | 30.00th=[16581], 40.00th=[16909], 50.00th=[17957], 60.00th=[19268], 00:17:01.722 | 70.00th=[21365], 80.00th=[25035], 90.00th=[30802], 95.00th=[44827], 00:17:01.722 | 99.00th=[55313], 99.50th=[55313], 99.90th=[63177], 99.95th=[64750], 00:17:01.722 | 99.99th=[67634] 00:17:01.722 write: IOPS=3425, BW=13.4MiB/s (14.0MB/s)(13.5MiB/1006msec); 0 zone resets 00:17:01.722 slat (nsec): min=1556, max=17172k, avg=138112.43, stdev=763653.68 00:17:01.722 clat (usec): min=568, max=82008, avg=18379.81, 
stdev=12435.89 00:17:01.722 lat (usec): min=1196, max=82018, avg=18517.93, stdev=12500.62 00:17:01.722 clat percentiles (usec): 00:17:01.722 | 1.00th=[ 5866], 5.00th=[ 8717], 10.00th=[ 9765], 20.00th=[10814], 00:17:01.722 | 30.00th=[12780], 40.00th=[13435], 50.00th=[14877], 60.00th=[17171], 00:17:01.722 | 70.00th=[18744], 80.00th=[20841], 90.00th=[26346], 95.00th=[44827], 00:17:01.722 | 99.00th=[80217], 99.50th=[81265], 99.90th=[82314], 99.95th=[82314], 00:17:01.722 | 99.99th=[82314] 00:17:01.722 bw ( KiB/s): min=10160, max=16416, per=16.43%, avg=13288.00, stdev=4423.66, samples=2 00:17:01.722 iops : min= 2540, max= 4104, avg=3322.00, stdev=1105.92, samples=2 00:17:01.722 lat (usec) : 750=0.02% 00:17:01.722 lat (msec) : 2=0.03%, 10=5.78%, 20=65.43%, 50=25.51%, 100=3.22% 00:17:01.722 cpu : usr=1.99%, sys=3.78%, ctx=334, majf=0, minf=1 00:17:01.722 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:17:01.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:01.722 issued rwts: total=3072,3446,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:01.722 job2: (groupid=0, jobs=1): err= 0: pid=1623688: Fri Apr 26 15:27:18 2024 00:17:01.722 read: IOPS=5922, BW=23.1MiB/s (24.3MB/s)(23.2MiB/1004msec) 00:17:01.722 slat (nsec): min=886, max=14390k, avg=75824.75, stdev=538520.31 00:17:01.722 clat (usec): min=1379, max=35725, avg=10758.56, stdev=4521.86 00:17:01.722 lat (usec): min=3545, max=35731, avg=10834.38, stdev=4542.95 00:17:01.722 clat percentiles (usec): 00:17:01.722 | 1.00th=[ 3884], 5.00th=[ 6325], 10.00th=[ 7111], 20.00th=[ 7832], 00:17:01.722 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[10159], 00:17:01.722 | 70.00th=[11731], 80.00th=[13698], 90.00th=[15533], 95.00th=[18482], 00:17:01.722 | 99.00th=[27919], 99.50th=[33424], 99.90th=[35914], 
99.95th=[35914], 00:17:01.722 | 99.99th=[35914] 00:17:01.722 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:17:01.722 slat (nsec): min=1474, max=6800.6k, avg=74755.92, stdev=473865.14 00:17:01.722 clat (usec): min=583, max=35528, avg=10228.14, stdev=5364.78 00:17:01.722 lat (usec): min=1162, max=35532, avg=10302.89, stdev=5394.93 00:17:01.722 clat percentiles (usec): 00:17:01.722 | 1.00th=[ 2999], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 6128], 00:17:01.722 | 30.00th=[ 6980], 40.00th=[ 7767], 50.00th=[ 9110], 60.00th=[ 9765], 00:17:01.722 | 70.00th=[11469], 80.00th=[13173], 90.00th=[17695], 95.00th=[22938], 00:17:01.722 | 99.00th=[27919], 99.50th=[28705], 99.90th=[32113], 99.95th=[32113], 00:17:01.722 | 99.99th=[35390] 00:17:01.722 bw ( KiB/s): min=24576, max=24576, per=30.38%, avg=24576.00, stdev= 0.00, samples=2 00:17:01.722 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:17:01.722 lat (usec) : 750=0.01% 00:17:01.722 lat (msec) : 2=0.17%, 4=2.03%, 10=57.69%, 20=33.67%, 50=6.44% 00:17:01.722 cpu : usr=3.29%, sys=7.58%, ctx=383, majf=0, minf=1 00:17:01.722 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:01.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:01.722 issued rwts: total=5946,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.723 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:01.723 job3: (groupid=0, jobs=1): err= 0: pid=1623689: Fri Apr 26 15:27:18 2024 00:17:01.723 read: IOPS=5724, BW=22.4MiB/s (23.4MB/s)(22.5MiB/1006msec) 00:17:01.723 slat (nsec): min=908, max=15626k, avg=82486.83, stdev=631445.16 00:17:01.723 clat (usec): min=2393, max=32009, avg=11103.27, stdev=4903.19 00:17:01.723 lat (usec): min=2413, max=32018, avg=11185.76, stdev=4938.52 00:17:01.723 clat percentiles (usec): 00:17:01.723 | 1.00th=[ 3490], 5.00th=[ 4817], 10.00th=[ 
6456], 20.00th=[ 7767], 00:17:01.723 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[10159], 60.00th=[10552], 00:17:01.723 | 70.00th=[11863], 80.00th=[13960], 90.00th=[17957], 95.00th=[20055], 00:17:01.723 | 99.00th=[28967], 99.50th=[30278], 99.90th=[32113], 99.95th=[32113], 00:17:01.723 | 99.99th=[32113] 00:17:01.723 write: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec); 0 zone resets 00:17:01.723 slat (nsec): min=1506, max=10012k, avg=67325.92, stdev=488529.18 00:17:01.723 clat (usec): min=1041, max=32705, avg=10379.92, stdev=5711.50 00:17:01.723 lat (usec): min=1049, max=32707, avg=10447.25, stdev=5734.15 00:17:01.723 clat percentiles (usec): 00:17:01.723 | 1.00th=[ 1729], 5.00th=[ 3687], 10.00th=[ 4686], 20.00th=[ 6128], 00:17:01.723 | 30.00th=[ 6915], 40.00th=[ 7439], 50.00th=[ 9110], 60.00th=[10421], 00:17:01.723 | 70.00th=[11731], 80.00th=[13829], 90.00th=[18482], 95.00th=[21890], 00:17:01.723 | 99.00th=[29492], 99.50th=[32113], 99.90th=[32113], 99.95th=[32637], 00:17:01.723 | 99.99th=[32637] 00:17:01.723 bw ( KiB/s): min=21128, max=28016, per=30.38%, avg=24572.00, stdev=4870.55, samples=2 00:17:01.723 iops : min= 5282, max= 7004, avg=6143.00, stdev=1217.64, samples=2 00:17:01.723 lat (msec) : 2=0.71%, 4=3.39%, 10=48.29%, 20=41.37%, 50=6.23% 00:17:01.723 cpu : usr=3.88%, sys=6.37%, ctx=419, majf=0, minf=1 00:17:01.723 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:01.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:01.723 issued rwts: total=5759,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.723 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:01.723 00:17:01.723 Run status group 0 (all jobs): 00:17:01.723 READ: bw=74.7MiB/s (78.4MB/s), 11.9MiB/s-23.1MiB/s (12.5MB/s-24.3MB/s), io=75.2MiB (78.8MB), run=1004-1006msec 00:17:01.723 WRITE: bw=79.0MiB/s (82.8MB/s), 13.4MiB/s-23.9MiB/s 
(14.0MB/s-25.1MB/s), io=79.5MiB (83.3MB), run=1004-1006msec 00:17:01.723 00:17:01.723 Disk stats (read/write): 00:17:01.723 nvme0n1: ios=3634/3879, merge=0/0, ticks=16307/17351, in_queue=33658, util=87.37% 00:17:01.723 nvme0n2: ios=2721/3072, merge=0/0, ticks=18194/17238, in_queue=35432, util=95.72% 00:17:01.723 nvme0n3: ios=4846/5120, merge=0/0, ticks=35914/31560, in_queue=67474, util=88.40% 00:17:01.723 nvme0n4: ios=4734/5120, merge=0/0, ticks=36798/38486, in_queue=75284, util=88.47% 00:17:01.723 15:27:18 -- target/fio.sh@55 -- # sync 00:17:01.723 15:27:18 -- target/fio.sh@59 -- # fio_pid=1624019 00:17:01.723 15:27:18 -- target/fio.sh@61 -- # sleep 3 00:17:01.723 15:27:18 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:01.723 [global] 00:17:01.723 thread=1 00:17:01.723 invalidate=1 00:17:01.723 rw=read 00:17:01.723 time_based=1 00:17:01.723 runtime=10 00:17:01.723 ioengine=libaio 00:17:01.723 direct=1 00:17:01.723 bs=4096 00:17:01.723 iodepth=1 00:17:01.723 norandommap=1 00:17:01.723 numjobs=1 00:17:01.723 00:17:01.723 [job0] 00:17:01.723 filename=/dev/nvme0n1 00:17:01.723 [job1] 00:17:01.723 filename=/dev/nvme0n2 00:17:01.723 [job2] 00:17:01.723 filename=/dev/nvme0n3 00:17:01.723 [job3] 00:17:01.723 filename=/dev/nvme0n4 00:17:01.723 Could not set queue depth (nvme0n1) 00:17:01.723 Could not set queue depth (nvme0n2) 00:17:01.723 Could not set queue depth (nvme0n3) 00:17:01.723 Could not set queue depth (nvme0n4) 00:17:01.981 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:01.981 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:01.981 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:01.981 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:17:01.981 fio-3.35 00:17:01.981 Starting 4 threads 00:17:04.509 15:27:21 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:04.509 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=262144, buflen=4096 00:17:04.509 fio: pid=1624216, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:04.768 15:27:21 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:04.768 15:27:22 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:04.768 15:27:22 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:04.768 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=274432, buflen=4096 00:17:04.768 fio: pid=1624215, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:05.027 15:27:22 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:05.027 15:27:22 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:05.027 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=10551296, buflen=4096 00:17:05.027 fio: pid=1624213, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:05.027 15:27:22 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:05.027 15:27:22 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:05.027 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=311296, buflen=4096 00:17:05.027 fio: pid=1624214, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:05.286 00:17:05.286 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, 
error=Remote I/O error): pid=1624213: Fri Apr 26 15:27:22 2024 00:17:05.286 read: IOPS=872, BW=3491KiB/s (3574kB/s)(10.1MiB/2952msec) 00:17:05.286 slat (usec): min=6, max=30290, avg=41.14, stdev=621.38 00:17:05.286 clat (usec): min=280, max=43017, avg=1090.32, stdev=1985.10 00:17:05.286 lat (usec): min=288, max=43044, avg=1131.47, stdev=2079.81 00:17:05.286 clat percentiles (usec): 00:17:05.286 | 1.00th=[ 660], 5.00th=[ 832], 10.00th=[ 881], 20.00th=[ 938], 00:17:05.286 | 30.00th=[ 971], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1020], 00:17:05.286 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1123], 00:17:05.286 | 99.00th=[ 1188], 99.50th=[ 1254], 99.90th=[42206], 99.95th=[42206], 00:17:05.286 | 99.99th=[43254] 00:17:05.286 bw ( KiB/s): min= 3808, max= 4000, per=100.00%, avg=3904.00, stdev=70.88, samples=5 00:17:05.286 iops : min= 952, max= 1000, avg=976.00, stdev=17.72, samples=5 00:17:05.286 lat (usec) : 500=0.43%, 750=1.01%, 1000=47.15% 00:17:05.286 lat (msec) : 2=51.07%, 4=0.08%, 50=0.23% 00:17:05.286 cpu : usr=1.73%, sys=3.32%, ctx=2579, majf=0, minf=1 00:17:05.286 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.286 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.286 issued rwts: total=2577,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.286 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:05.286 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1624214: Fri Apr 26 15:27:22 2024 00:17:05.286 read: IOPS=24, BW=97.9KiB/s (100kB/s)(304KiB/3106msec) 00:17:05.286 slat (usec): min=24, max=13544, avg=398.17, stdev=1949.70 00:17:05.286 clat (usec): min=788, max=42077, avg=40181.69, stdev=8017.03 00:17:05.286 lat (usec): min=814, max=54944, avg=40584.77, stdev=8308.40 00:17:05.286 clat percentiles (usec): 00:17:05.286 | 1.00th=[ 791], 
5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:17:05.286 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:17:05.286 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:05.286 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:05.286 | 99.99th=[42206] 00:17:05.286 bw ( KiB/s): min= 95, max= 104, per=2.73%, avg=98.50, stdev= 4.28, samples=6 00:17:05.286 iops : min= 23, max= 26, avg=24.50, stdev= 1.22, samples=6 00:17:05.286 lat (usec) : 1000=2.60% 00:17:05.286 lat (msec) : 2=1.30%, 50=94.81% 00:17:05.286 cpu : usr=0.16%, sys=0.00%, ctx=81, majf=0, minf=1 00:17:05.286 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.286 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.286 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.286 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:05.286 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1624215: Fri Apr 26 15:27:22 2024 00:17:05.286 read: IOPS=24, BW=96.4KiB/s (98.7kB/s)(268KiB/2781msec) 00:17:05.286 slat (usec): min=23, max=293, avg=28.41, stdev=32.61 00:17:05.286 clat (usec): min=1073, max=42084, avg=41158.40, stdev=4985.88 00:17:05.286 lat (usec): min=1110, max=42108, avg=41186.87, stdev=4984.79 00:17:05.287 clat percentiles (usec): 00:17:05.287 | 1.00th=[ 1074], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:17:05.287 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:17:05.287 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:05.287 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:05.287 | 99.99th=[42206] 00:17:05.287 bw ( KiB/s): min= 96, max= 96, per=2.68%, avg=96.00, stdev= 0.00, samples=5 00:17:05.287 iops : min= 24, max= 24, 
avg=24.00, stdev= 0.00, samples=5 00:17:05.287 lat (msec) : 2=1.47%, 50=97.06% 00:17:05.287 cpu : usr=0.00%, sys=0.11%, ctx=69, majf=0, minf=1 00:17:05.287 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.287 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.287 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.287 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:05.287 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1624216: Fri Apr 26 15:27:22 2024 00:17:05.287 read: IOPS=25, BW=98.9KiB/s (101kB/s)(256KiB/2589msec) 00:17:05.287 slat (nsec): min=26141, max=43802, avg=28358.48, stdev=2899.80 00:17:05.287 clat (usec): min=768, max=42041, avg=40001.96, stdev=6783.95 00:17:05.287 lat (usec): min=796, max=42067, avg=40030.34, stdev=6782.63 00:17:05.287 clat percentiles (usec): 00:17:05.287 | 1.00th=[ 766], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:17:05.287 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:05.287 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:17:05.287 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:05.287 | 99.99th=[42206] 00:17:05.287 bw ( KiB/s): min= 96, max= 104, per=2.76%, avg=99.20, stdev= 4.38, samples=5 00:17:05.287 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:17:05.287 lat (usec) : 1000=1.54% 00:17:05.287 lat (msec) : 10=1.54%, 50=95.38% 00:17:05.287 cpu : usr=0.15%, sys=0.00%, ctx=68, majf=0, minf=2 00:17:05.287 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.287 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.287 issued rwts: total=65,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:17:05.287 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:05.287 00:17:05.287 Run status group 0 (all jobs): 00:17:05.287 READ: bw=3584KiB/s (3670kB/s), 96.4KiB/s-3491KiB/s (98.7kB/s-3574kB/s), io=10.9MiB (11.4MB), run=2589-3106msec 00:17:05.287 00:17:05.287 Disk stats (read/write): 00:17:05.287 nvme0n1: ios=2573/0, merge=0/0, ticks=2557/0, in_queue=2557, util=93.56% 00:17:05.287 nvme0n2: ios=76/0, merge=0/0, ticks=3055/0, in_queue=3055, util=94.86% 00:17:05.287 nvme0n3: ios=62/0, merge=0/0, ticks=2551/0, in_queue=2551, util=96.03% 00:17:05.287 nvme0n4: ios=95/0, merge=0/0, ticks=3281/0, in_queue=3281, util=98.85% 00:17:05.287 15:27:22 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:05.287 15:27:22 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:05.545 15:27:22 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:05.545 15:27:22 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:05.545 15:27:22 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:05.545 15:27:22 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:05.804 15:27:23 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:05.804 15:27:23 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:06.072 15:27:23 -- target/fio.sh@69 -- # fio_status=0 00:17:06.072 15:27:23 -- target/fio.sh@70 -- # wait 1624019 00:17:06.072 15:27:23 -- target/fio.sh@70 -- # fio_status=4 00:17:06.072 15:27:23 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:06.072 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:06.072 15:27:23 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:06.072 15:27:23 -- common/autotest_common.sh@1205 -- # local i=0 00:17:06.072 15:27:23 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:06.072 15:27:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:06.072 15:27:23 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:06.072 15:27:23 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:06.072 15:27:23 -- common/autotest_common.sh@1217 -- # return 0 00:17:06.072 15:27:23 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:06.072 15:27:23 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:06.072 nvmf hotplug test: fio failed as expected 00:17:06.072 15:27:23 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:06.332 15:27:23 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:06.332 15:27:23 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:06.332 15:27:23 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:06.332 15:27:23 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:06.332 15:27:23 -- target/fio.sh@91 -- # nvmftestfini 00:17:06.332 15:27:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:06.332 15:27:23 -- nvmf/common.sh@117 -- # sync 00:17:06.332 15:27:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:06.332 15:27:23 -- nvmf/common.sh@120 -- # set +e 00:17:06.332 15:27:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:06.332 15:27:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:06.332 rmmod nvme_tcp 00:17:06.332 rmmod nvme_fabrics 00:17:06.332 rmmod nvme_keyring 00:17:06.332 15:27:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:06.332 15:27:23 -- nvmf/common.sh@124 -- # set 
-e 00:17:06.332 15:27:23 -- nvmf/common.sh@125 -- # return 0 00:17:06.332 15:27:23 -- nvmf/common.sh@478 -- # '[' -n 1620056 ']' 00:17:06.332 15:27:23 -- nvmf/common.sh@479 -- # killprocess 1620056 00:17:06.332 15:27:23 -- common/autotest_common.sh@936 -- # '[' -z 1620056 ']' 00:17:06.332 15:27:23 -- common/autotest_common.sh@940 -- # kill -0 1620056 00:17:06.332 15:27:23 -- common/autotest_common.sh@941 -- # uname 00:17:06.332 15:27:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:06.332 15:27:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1620056 00:17:06.332 15:27:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:06.332 15:27:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:06.332 15:27:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1620056' 00:17:06.332 killing process with pid 1620056 00:17:06.332 15:27:23 -- common/autotest_common.sh@955 -- # kill 1620056 00:17:06.332 15:27:23 -- common/autotest_common.sh@960 -- # wait 1620056 00:17:06.593 15:27:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:06.593 15:27:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:06.593 15:27:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:06.593 15:27:23 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:06.593 15:27:23 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:06.593 15:27:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.593 15:27:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.593 15:27:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.505 15:27:25 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:08.505 00:17:08.505 real 0m28.536s 00:17:08.505 user 2m27.391s 00:17:08.505 sys 0m9.099s 00:17:08.505 15:27:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:08.506 15:27:25 -- common/autotest_common.sh@10 -- # set +x 00:17:08.506 
************************************ 00:17:08.506 END TEST nvmf_fio_target 00:17:08.506 ************************************ 00:17:08.766 15:27:25 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:08.766 15:27:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:08.766 15:27:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:08.766 15:27:25 -- common/autotest_common.sh@10 -- # set +x 00:17:08.766 ************************************ 00:17:08.766 START TEST nvmf_bdevio 00:17:08.766 ************************************ 00:17:08.766 15:27:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:08.766 * Looking for test storage... 00:17:09.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:09.027 15:27:26 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:09.027 15:27:26 -- nvmf/common.sh@7 -- # uname -s 00:17:09.027 15:27:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:09.027 15:27:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.027 15:27:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.027 15:27:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.027 15:27:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.027 15:27:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.027 15:27:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.027 15:27:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.027 15:27:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.027 15:27:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:09.027 15:27:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:09.027 15:27:26 -- 
nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:09.027 15:27:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:09.027 15:27:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:09.027 15:27:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:09.028 15:27:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:09.028 15:27:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:09.028 15:27:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:09.028 15:27:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:09.028 15:27:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:09.028 15:27:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.028 15:27:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.028 15:27:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.028 15:27:26 -- paths/export.sh@5 -- # export PATH 00:17:09.028 15:27:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.028 15:27:26 -- nvmf/common.sh@47 -- # : 0 00:17:09.028 15:27:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:09.028 15:27:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:09.028 15:27:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:09.028 15:27:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:09.028 15:27:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:09.028 15:27:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:09.028 15:27:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:09.028 15:27:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:09.028 15:27:26 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:09.028 15:27:26 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:09.028 15:27:26 -- target/bdevio.sh@14 -- # 
nvmftestinit 00:17:09.028 15:27:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:09.028 15:27:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:09.028 15:27:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:09.028 15:27:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:09.028 15:27:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:09.028 15:27:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.028 15:27:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.028 15:27:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.028 15:27:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:09.028 15:27:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:09.028 15:27:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:09.028 15:27:26 -- common/autotest_common.sh@10 -- # set +x 00:17:15.615 15:27:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:15.615 15:27:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:15.615 15:27:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:15.615 15:27:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:15.615 15:27:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:15.615 15:27:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:15.615 15:27:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:15.615 15:27:33 -- nvmf/common.sh@295 -- # net_devs=() 00:17:15.615 15:27:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:15.615 15:27:33 -- nvmf/common.sh@296 -- # e810=() 00:17:15.615 15:27:33 -- nvmf/common.sh@296 -- # local -ga e810 00:17:15.615 15:27:33 -- nvmf/common.sh@297 -- # x722=() 00:17:15.615 15:27:33 -- nvmf/common.sh@297 -- # local -ga x722 00:17:15.615 15:27:33 -- nvmf/common.sh@298 -- # mlx=() 00:17:15.615 15:27:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:15.615 15:27:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:15.615 15:27:33 -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:15.615 15:27:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:15.615 15:27:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:15.615 15:27:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:15.615 15:27:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:15.615 15:27:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:15.615 15:27:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:15.615 15:27:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:15.615 15:27:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:15.615 15:27:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:15.615 15:27:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:15.615 15:27:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:15.615 15:27:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:15.615 15:27:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:15.615 15:27:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:15.615 15:27:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:15.615 15:27:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:15.615 15:27:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:15.615 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:15.615 15:27:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:15.615 15:27:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:15.615 15:27:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.615 15:27:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.615 15:27:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:15.615 15:27:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:15.615 15:27:33 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:15.615 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:15.615 15:27:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:15.615 15:27:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:15.615 15:27:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.615 15:27:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.615 15:27:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:15.615 15:27:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:15.615 15:27:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:15.615 15:27:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:15.615 15:27:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:15.615 15:27:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.615 15:27:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:15.615 15:27:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.615 15:27:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:15.615 Found net devices under 0000:31:00.0: cvl_0_0 00:17:15.615 15:27:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.615 15:27:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:15.615 15:27:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.615 15:27:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:15.615 15:27:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.615 15:27:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:15.615 Found net devices under 0000:31:00.1: cvl_0_1 00:17:15.615 15:27:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.615 15:27:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:15.615 15:27:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:15.615 15:27:33 -- nvmf/common.sh@405 -- # [[ yes == yes 
]] 00:17:15.615 15:27:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:15.615 15:27:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:15.615 15:27:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:15.615 15:27:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:15.615 15:27:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:15.615 15:27:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:15.615 15:27:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:15.615 15:27:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:15.615 15:27:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:15.615 15:27:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:15.615 15:27:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:15.615 15:27:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:15.615 15:27:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:15.615 15:27:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:15.615 15:27:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:15.876 15:27:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:15.876 15:27:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:15.876 15:27:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:15.876 15:27:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:15.876 15:27:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:15.876 15:27:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:15.876 15:27:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:15.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:15.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:17:15.876 00:17:15.876 --- 10.0.0.2 ping statistics --- 00:17:15.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.876 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:17:15.876 15:27:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:16.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:16.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:17:16.137 00:17:16.137 --- 10.0.0.1 ping statistics --- 00:17:16.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.137 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:17:16.137 15:27:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.137 15:27:33 -- nvmf/common.sh@411 -- # return 0 00:17:16.137 15:27:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:16.137 15:27:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.137 15:27:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:16.137 15:27:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:16.137 15:27:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.137 15:27:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:16.137 15:27:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:16.137 15:27:33 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:16.137 15:27:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:16.137 15:27:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:16.137 15:27:33 -- common/autotest_common.sh@10 -- # set +x 00:17:16.137 15:27:33 -- nvmf/common.sh@470 -- # nvmfpid=1629299 00:17:16.137 15:27:33 -- nvmf/common.sh@471 -- # waitforlisten 1629299 00:17:16.137 15:27:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:16.137 15:27:33 -- common/autotest_common.sh@817 
-- # '[' -z 1629299 ']' 00:17:16.137 15:27:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.137 15:27:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:16.137 15:27:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.137 15:27:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:16.137 15:27:33 -- common/autotest_common.sh@10 -- # set +x 00:17:16.137 [2024-04-26 15:27:33.423200] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:17:16.137 [2024-04-26 15:27:33.423246] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.137 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.137 [2024-04-26 15:27:33.490950] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:16.137 [2024-04-26 15:27:33.571224] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.137 [2024-04-26 15:27:33.571285] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.137 [2024-04-26 15:27:33.571293] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.137 [2024-04-26 15:27:33.571300] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.137 [2024-04-26 15:27:33.571306] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:16.137 [2024-04-26 15:27:33.571476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:16.137 [2024-04-26 15:27:33.571634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:16.137 [2024-04-26 15:27:33.571791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:16.137 [2024-04-26 15:27:33.571792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:17.079 15:27:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:17.079 15:27:34 -- common/autotest_common.sh@850 -- # return 0 00:17:17.079 15:27:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:17.079 15:27:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:17.079 15:27:34 -- common/autotest_common.sh@10 -- # set +x 00:17:17.079 15:27:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.079 15:27:34 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:17.079 15:27:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.079 15:27:34 -- common/autotest_common.sh@10 -- # set +x 00:17:17.079 [2024-04-26 15:27:34.263270] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:17.079 15:27:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.079 15:27:34 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:17.079 15:27:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.079 15:27:34 -- common/autotest_common.sh@10 -- # set +x 00:17:17.079 Malloc0 00:17:17.079 15:27:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.079 15:27:34 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:17.079 15:27:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.079 15:27:34 -- common/autotest_common.sh@10 -- # set +x 00:17:17.079 15:27:34 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:17:17.079 15:27:34 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:17.079 15:27:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.079 15:27:34 -- common/autotest_common.sh@10 -- # set +x 00:17:17.079 15:27:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.079 15:27:34 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:17.079 15:27:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.079 15:27:34 -- common/autotest_common.sh@10 -- # set +x 00:17:17.079 [2024-04-26 15:27:34.312287] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:17.079 15:27:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.079 15:27:34 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:17.079 15:27:34 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:17.079 15:27:34 -- nvmf/common.sh@521 -- # config=() 00:17:17.079 15:27:34 -- nvmf/common.sh@521 -- # local subsystem config 00:17:17.079 15:27:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:17.079 15:27:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:17.079 { 00:17:17.079 "params": { 00:17:17.079 "name": "Nvme$subsystem", 00:17:17.079 "trtype": "$TEST_TRANSPORT", 00:17:17.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.079 "adrfam": "ipv4", 00:17:17.079 "trsvcid": "$NVMF_PORT", 00:17:17.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.079 "hdgst": ${hdgst:-false}, 00:17:17.079 "ddgst": ${ddgst:-false} 00:17:17.079 }, 00:17:17.079 "method": "bdev_nvme_attach_controller" 00:17:17.079 } 00:17:17.079 EOF 00:17:17.079 )") 00:17:17.079 15:27:34 -- nvmf/common.sh@543 -- # cat 00:17:17.079 15:27:34 -- nvmf/common.sh@545 -- # jq . 
00:17:17.079 15:27:34 -- nvmf/common.sh@546 -- # IFS=, 00:17:17.079 15:27:34 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:17.079 "params": { 00:17:17.079 "name": "Nvme1", 00:17:17.079 "trtype": "tcp", 00:17:17.079 "traddr": "10.0.0.2", 00:17:17.079 "adrfam": "ipv4", 00:17:17.079 "trsvcid": "4420", 00:17:17.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.079 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:17.079 "hdgst": false, 00:17:17.079 "ddgst": false 00:17:17.079 }, 00:17:17.079 "method": "bdev_nvme_attach_controller" 00:17:17.079 }' 00:17:17.079 [2024-04-26 15:27:34.367603] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:17:17.079 [2024-04-26 15:27:34.367669] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1629618 ] 00:17:17.079 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.079 [2024-04-26 15:27:34.432843] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:17.079 [2024-04-26 15:27:34.506133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.079 [2024-04-26 15:27:34.506262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.079 [2024-04-26 15:27:34.506265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.648 I/O targets: 00:17:17.648 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:17.648 00:17:17.648 00:17:17.648 CUnit - A unit testing framework for C - Version 2.1-3 00:17:17.648 http://cunit.sourceforge.net/ 00:17:17.648 00:17:17.648 00:17:17.648 Suite: bdevio tests on: Nvme1n1 00:17:17.648 Test: blockdev write read block ...passed 00:17:17.648 Test: blockdev write zeroes read block ...passed 00:17:17.648 Test: blockdev write zeroes read no split ...passed 00:17:17.648 Test: blockdev write zeroes read split ...passed 00:17:17.648 Test: blockdev write 
zeroes read split partial ...passed 00:17:17.648 Test: blockdev reset ...[2024-04-26 15:27:34.983232] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:17.648 [2024-04-26 15:27:34.983296] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc23910 (9): Bad file descriptor 00:17:17.648 [2024-04-26 15:27:34.994405] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:17.648 passed 00:17:17.648 Test: blockdev write read 8 blocks ...passed 00:17:17.648 Test: blockdev write read size > 128k ...passed 00:17:17.648 Test: blockdev write read invalid size ...passed 00:17:17.648 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:17.648 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:17.648 Test: blockdev write read max offset ...passed 00:17:17.906 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:17.907 Test: blockdev writev readv 8 blocks ...passed 00:17:17.907 Test: blockdev writev readv 30 x 1block ...passed 00:17:17.907 Test: blockdev writev readv block ...passed 00:17:17.907 Test: blockdev writev readv size > 128k ...passed 00:17:17.907 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:17.907 Test: blockdev comparev and writev ...[2024-04-26 15:27:35.262066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:17.907 [2024-04-26 15:27:35.262091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.907 [2024-04-26 15:27:35.262102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:17.907 [2024-04-26 15:27:35.262111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED 
FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:17.907 [2024-04-26 15:27:35.262629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:17.907 [2024-04-26 15:27:35.262639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:17.907 [2024-04-26 15:27:35.262648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:17.907 [2024-04-26 15:27:35.262654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:17.907 [2024-04-26 15:27:35.263179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:17.907 [2024-04-26 15:27:35.263187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:17.907 [2024-04-26 15:27:35.263197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:17.907 [2024-04-26 15:27:35.263202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:17.907 [2024-04-26 15:27:35.263722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:17.907 [2024-04-26 15:27:35.263731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:17.907 [2024-04-26 15:27:35.263740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:17.907 [2024-04-26 15:27:35.263745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:17.907 passed 00:17:17.907 Test: blockdev nvme passthru rw ...passed 00:17:17.907 Test: blockdev nvme passthru vendor specific ...[2024-04-26 15:27:35.348660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:17.907 [2024-04-26 15:27:35.348672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:17.907 [2024-04-26 15:27:35.349096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:17.907 [2024-04-26 15:27:35.349104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:17.907 [2024-04-26 15:27:35.349546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:17.907 [2024-04-26 15:27:35.349554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:17.907 [2024-04-26 15:27:35.349998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:17.907 [2024-04-26 15:27:35.350005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:17.907 passed 00:17:18.167 Test: blockdev nvme admin passthru ...passed 00:17:18.167 Test: blockdev copy ...passed 00:17:18.167 00:17:18.167 Run Summary: Type Total Ran Passed Failed Inactive 00:17:18.167 suites 1 1 n/a 0 0 00:17:18.167 tests 23 23 23 0 0 00:17:18.167 asserts 152 152 152 0 n/a 00:17:18.167 00:17:18.167 Elapsed time = 1.190 seconds 00:17:18.167 15:27:35 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:17:18.167 15:27:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.167 15:27:35 -- common/autotest_common.sh@10 -- # set +x 00:17:18.167 15:27:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.167 15:27:35 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:18.167 15:27:35 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:18.167 15:27:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:18.167 15:27:35 -- nvmf/common.sh@117 -- # sync 00:17:18.167 15:27:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:18.167 15:27:35 -- nvmf/common.sh@120 -- # set +e 00:17:18.167 15:27:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:18.167 15:27:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:18.167 rmmod nvme_tcp 00:17:18.167 rmmod nvme_fabrics 00:17:18.167 rmmod nvme_keyring 00:17:18.167 15:27:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:18.167 15:27:35 -- nvmf/common.sh@124 -- # set -e 00:17:18.167 15:27:35 -- nvmf/common.sh@125 -- # return 0 00:17:18.167 15:27:35 -- nvmf/common.sh@478 -- # '[' -n 1629299 ']' 00:17:18.167 15:27:35 -- nvmf/common.sh@479 -- # killprocess 1629299 00:17:18.167 15:27:35 -- common/autotest_common.sh@936 -- # '[' -z 1629299 ']' 00:17:18.167 15:27:35 -- common/autotest_common.sh@940 -- # kill -0 1629299 00:17:18.428 15:27:35 -- common/autotest_common.sh@941 -- # uname 00:17:18.428 15:27:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:18.428 15:27:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1629299 00:17:18.428 15:27:35 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:18.428 15:27:35 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:18.428 15:27:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1629299' 00:17:18.428 killing process with pid 1629299 00:17:18.428 15:27:35 -- common/autotest_common.sh@955 -- # kill 1629299 00:17:18.428 15:27:35 -- 
common/autotest_common.sh@960 -- # wait 1629299 00:17:18.428 15:27:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:18.428 15:27:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:18.428 15:27:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:18.428 15:27:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:18.428 15:27:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:18.428 15:27:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.428 15:27:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.428 15:27:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.030 15:27:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:21.030 00:17:21.030 real 0m11.804s 00:17:21.030 user 0m13.555s 00:17:21.030 sys 0m5.738s 00:17:21.030 15:27:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:21.030 15:27:37 -- common/autotest_common.sh@10 -- # set +x 00:17:21.030 ************************************ 00:17:21.030 END TEST nvmf_bdevio 00:17:21.030 ************************************ 00:17:21.030 15:27:37 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:17:21.030 15:27:37 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:21.030 15:27:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:21.030 15:27:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:21.030 15:27:37 -- common/autotest_common.sh@10 -- # set +x 00:17:21.030 ************************************ 00:17:21.030 START TEST nvmf_bdevio_no_huge 00:17:21.030 ************************************ 00:17:21.030 15:27:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:21.030 * Looking for test storage... 
00:17:21.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:21.030 15:27:38 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:21.030 15:27:38 -- nvmf/common.sh@7 -- # uname -s 00:17:21.030 15:27:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.030 15:27:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.030 15:27:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.030 15:27:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.030 15:27:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.030 15:27:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.030 15:27:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.030 15:27:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.030 15:27:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.030 15:27:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.030 15:27:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:21.030 15:27:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:21.030 15:27:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.030 15:27:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.030 15:27:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:21.030 15:27:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:21.030 15:27:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:21.030 15:27:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.030 15:27:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.030 15:27:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.030 15:27:38 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.030 15:27:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.030 15:27:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.030 15:27:38 -- paths/export.sh@5 -- # export PATH 00:17:21.030 15:27:38 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.030 15:27:38 -- nvmf/common.sh@47 -- # : 0 00:17:21.030 15:27:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:21.030 15:27:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:21.030 15:27:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:21.030 15:27:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.030 15:27:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.030 15:27:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:21.030 15:27:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:21.030 15:27:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:21.030 15:27:38 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:21.030 15:27:38 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:21.030 15:27:38 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:21.030 15:27:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:21.030 15:27:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:21.030 15:27:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:21.030 15:27:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:21.030 15:27:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:21.030 15:27:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.030 15:27:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:21.030 15:27:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.030 15:27:38 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:21.030 15:27:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:21.030 15:27:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:21.030 15:27:38 -- common/autotest_common.sh@10 -- # set +x 00:17:29.177 15:27:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:29.177 15:27:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:29.177 15:27:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:29.177 15:27:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:29.177 15:27:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:29.177 15:27:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:29.177 15:27:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:29.177 15:27:45 -- nvmf/common.sh@295 -- # net_devs=() 00:17:29.177 15:27:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:29.177 15:27:45 -- nvmf/common.sh@296 -- # e810=() 00:17:29.177 15:27:45 -- nvmf/common.sh@296 -- # local -ga e810 00:17:29.177 15:27:45 -- nvmf/common.sh@297 -- # x722=() 00:17:29.177 15:27:45 -- nvmf/common.sh@297 -- # local -ga x722 00:17:29.177 15:27:45 -- nvmf/common.sh@298 -- # mlx=() 00:17:29.177 15:27:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:29.177 15:27:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:29.177 15:27:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:29.177 15:27:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:29.177 15:27:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:29.177 15:27:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:29.177 15:27:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:29.177 15:27:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:29.177 15:27:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:29.177 15:27:45 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:29.177 15:27:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:29.177 15:27:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:29.177 15:27:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:29.177 15:27:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:29.177 15:27:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:29.177 15:27:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:29.177 15:27:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:29.177 15:27:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:29.177 15:27:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:29.177 15:27:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:29.177 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:29.177 15:27:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:29.177 15:27:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:29.177 15:27:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.177 15:27:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.177 15:27:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:29.177 15:27:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:29.177 15:27:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:29.177 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:29.177 15:27:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:29.177 15:27:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:29.177 15:27:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.177 15:27:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.177 15:27:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:29.177 15:27:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:29.177 15:27:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:29.177 15:27:45 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:29.177 15:27:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:29.177 15:27:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.177 15:27:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:29.177 15:27:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.177 15:27:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:29.177 Found net devices under 0000:31:00.0: cvl_0_0 00:17:29.177 15:27:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.177 15:27:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:29.177 15:27:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.177 15:27:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:29.177 15:27:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.177 15:27:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:29.177 Found net devices under 0000:31:00.1: cvl_0_1 00:17:29.177 15:27:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.177 15:27:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:29.177 15:27:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:29.177 15:27:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:29.177 15:27:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:29.177 15:27:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:29.177 15:27:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.177 15:27:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:29.177 15:27:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:29.177 15:27:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:29.177 15:27:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:29.177 15:27:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:29.177 15:27:45 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:29.177 15:27:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:29.177 15:27:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.177 15:27:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:29.177 15:27:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:29.178 15:27:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:29.178 15:27:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:29.178 15:27:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:29.178 15:27:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:29.178 15:27:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:29.178 15:27:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:29.178 15:27:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:29.178 15:27:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:29.178 15:27:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:29.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:29.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:17:29.178 00:17:29.178 --- 10.0.0.2 ping statistics --- 00:17:29.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.178 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:17:29.178 15:27:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:29.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:29.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:17:29.178 00:17:29.178 --- 10.0.0.1 ping statistics --- 00:17:29.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.178 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:17:29.178 15:27:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:29.178 15:27:45 -- nvmf/common.sh@411 -- # return 0 00:17:29.178 15:27:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:29.178 15:27:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:29.178 15:27:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:29.178 15:27:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:29.178 15:27:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:29.178 15:27:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:29.178 15:27:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:29.178 15:27:45 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:29.178 15:27:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:29.178 15:27:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:29.178 15:27:45 -- common/autotest_common.sh@10 -- # set +x 00:17:29.178 15:27:45 -- nvmf/common.sh@470 -- # nvmfpid=1634062 00:17:29.178 15:27:45 -- nvmf/common.sh@471 -- # waitforlisten 1634062 00:17:29.178 15:27:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:29.178 15:27:45 -- common/autotest_common.sh@817 -- # '[' -z 1634062 ']' 00:17:29.178 15:27:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.178 15:27:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:29.178 15:27:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:29.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.178 15:27:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:29.178 15:27:45 -- common/autotest_common.sh@10 -- # set +x 00:17:29.178 [2024-04-26 15:27:45.550124] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:17:29.178 [2024-04-26 15:27:45.550178] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:29.178 [2024-04-26 15:27:45.642699] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:29.178 [2024-04-26 15:27:45.749133] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.178 [2024-04-26 15:27:45.749186] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:29.178 [2024-04-26 15:27:45.749194] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:29.178 [2024-04-26 15:27:45.749201] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:29.178 [2024-04-26 15:27:45.749207] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:29.178 [2024-04-26 15:27:45.749411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:29.178 [2024-04-26 15:27:45.749571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:29.178 [2024-04-26 15:27:45.749732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:29.178 [2024-04-26 15:27:45.749733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:29.178 15:27:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:29.178 15:27:46 -- common/autotest_common.sh@850 -- # return 0 00:17:29.178 15:27:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:29.178 15:27:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:29.178 15:27:46 -- common/autotest_common.sh@10 -- # set +x 00:17:29.178 15:27:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.178 15:27:46 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:29.178 15:27:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.178 15:27:46 -- common/autotest_common.sh@10 -- # set +x 00:17:29.178 [2024-04-26 15:27:46.395546] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:29.178 15:27:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.178 15:27:46 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:29.178 15:27:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.178 15:27:46 -- common/autotest_common.sh@10 -- # set +x 00:17:29.178 Malloc0 00:17:29.178 15:27:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.178 15:27:46 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:29.178 15:27:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.178 15:27:46 -- common/autotest_common.sh@10 -- # set +x 00:17:29.178 15:27:46 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:17:29.178 15:27:46 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:29.178 15:27:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.178 15:27:46 -- common/autotest_common.sh@10 -- # set +x 00:17:29.178 15:27:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.178 15:27:46 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:29.178 15:27:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.178 15:27:46 -- common/autotest_common.sh@10 -- # set +x 00:17:29.178 [2024-04-26 15:27:46.449201] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:29.178 15:27:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.178 15:27:46 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:29.178 15:27:46 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:29.178 15:27:46 -- nvmf/common.sh@521 -- # config=() 00:17:29.178 15:27:46 -- nvmf/common.sh@521 -- # local subsystem config 00:17:29.178 15:27:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:29.178 15:27:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:29.178 { 00:17:29.178 "params": { 00:17:29.178 "name": "Nvme$subsystem", 00:17:29.178 "trtype": "$TEST_TRANSPORT", 00:17:29.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:29.178 "adrfam": "ipv4", 00:17:29.178 "trsvcid": "$NVMF_PORT", 00:17:29.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:29.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:29.178 "hdgst": ${hdgst:-false}, 00:17:29.178 "ddgst": ${ddgst:-false} 00:17:29.178 }, 00:17:29.178 "method": "bdev_nvme_attach_controller" 00:17:29.178 } 00:17:29.178 EOF 00:17:29.178 )") 00:17:29.178 15:27:46 -- nvmf/common.sh@543 -- # cat 00:17:29.178 15:27:46 -- nvmf/common.sh@545 -- # jq 
. 00:17:29.178 15:27:46 -- nvmf/common.sh@546 -- # IFS=, 00:17:29.178 15:27:46 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:29.178 "params": { 00:17:29.178 "name": "Nvme1", 00:17:29.178 "trtype": "tcp", 00:17:29.178 "traddr": "10.0.0.2", 00:17:29.178 "adrfam": "ipv4", 00:17:29.178 "trsvcid": "4420", 00:17:29.178 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.178 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:29.178 "hdgst": false, 00:17:29.178 "ddgst": false 00:17:29.178 }, 00:17:29.178 "method": "bdev_nvme_attach_controller" 00:17:29.178 }' 00:17:29.178 [2024-04-26 15:27:46.501980] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:17:29.178 [2024-04-26 15:27:46.502046] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1634244 ] 00:17:29.178 [2024-04-26 15:27:46.572715] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:29.436 [2024-04-26 15:27:46.667901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.436 [2024-04-26 15:27:46.668018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.436 [2024-04-26 15:27:46.668021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.436 I/O targets: 00:17:29.436 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:29.436 00:17:29.436 00:17:29.436 CUnit - A unit testing framework for C - Version 2.1-3 00:17:29.436 http://cunit.sourceforge.net/ 00:17:29.436 00:17:29.436 00:17:29.436 Suite: bdevio tests on: Nvme1n1 00:17:29.695 Test: blockdev write read block ...passed 00:17:29.695 Test: blockdev write zeroes read block ...passed 00:17:29.695 Test: blockdev write zeroes read no split ...passed 00:17:29.695 Test: blockdev write zeroes read split ...passed 00:17:29.695 Test: blockdev write zeroes read split partial ...passed 00:17:29.695 Test: 
blockdev reset ...[2024-04-26 15:27:47.014792] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:29.695 [2024-04-26 15:27:47.014853] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13da230 (9): Bad file descriptor 00:17:29.696 [2024-04-26 15:27:47.118350] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:29.696 passed 00:17:29.696 Test: blockdev write read 8 blocks ...passed 00:17:29.696 Test: blockdev write read size > 128k ...passed 00:17:29.696 Test: blockdev write read invalid size ...passed 00:17:29.954 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:29.954 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:29.954 Test: blockdev write read max offset ...passed 00:17:29.954 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:29.954 Test: blockdev writev readv 8 blocks ...passed 00:17:29.954 Test: blockdev writev readv 30 x 1block ...passed 00:17:29.954 Test: blockdev writev readv block ...passed 00:17:29.954 Test: blockdev writev readv size > 128k ...passed 00:17:29.954 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:29.954 Test: blockdev comparev and writev ...[2024-04-26 15:27:47.340472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.954 [2024-04-26 15:27:47.340496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.954 [2024-04-26 15:27:47.340507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.954 [2024-04-26 15:27:47.340513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:17:29.954 [2024-04-26 15:27:47.340862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.954 [2024-04-26 15:27:47.340870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:29.954 [2024-04-26 15:27:47.340880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.954 [2024-04-26 15:27:47.340889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:29.954 [2024-04-26 15:27:47.341261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.954 [2024-04-26 15:27:47.341268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:29.954 [2024-04-26 15:27:47.341277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.954 [2024-04-26 15:27:47.341282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:29.954 [2024-04-26 15:27:47.341612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.954 [2024-04-26 15:27:47.341619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:29.954 [2024-04-26 15:27:47.341628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.954 [2024-04-26 15:27:47.341633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:29.954 passed 00:17:30.214 Test: blockdev nvme passthru rw ...passed 00:17:30.214 Test: blockdev nvme passthru vendor specific ...[2024-04-26 15:27:47.425377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:30.214 [2024-04-26 15:27:47.425392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:30.214 [2024-04-26 15:27:47.425620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:30.214 [2024-04-26 15:27:47.425627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:30.214 [2024-04-26 15:27:47.425885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:30.214 [2024-04-26 15:27:47.425892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:30.214 [2024-04-26 15:27:47.426150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:30.214 [2024-04-26 15:27:47.426157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:30.214 passed 00:17:30.214 Test: blockdev nvme admin passthru ...passed 00:17:30.214 Test: blockdev copy ...passed 00:17:30.214 00:17:30.214 Run Summary: Type Total Ran Passed Failed Inactive 00:17:30.214 suites 1 1 n/a 0 0 00:17:30.214 tests 23 23 23 0 0 00:17:30.214 asserts 152 152 152 0 n/a 00:17:30.214 00:17:30.214 Elapsed time = 1.289 seconds 00:17:30.474 15:27:47 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:30.474 15:27:47 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:17:30.474 15:27:47 -- common/autotest_common.sh@10 -- # set +x 00:17:30.474 15:27:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:30.474 15:27:47 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:30.474 15:27:47 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:30.474 15:27:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:30.474 15:27:47 -- nvmf/common.sh@117 -- # sync 00:17:30.474 15:27:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:30.474 15:27:47 -- nvmf/common.sh@120 -- # set +e 00:17:30.474 15:27:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:30.474 15:27:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:30.474 rmmod nvme_tcp 00:17:30.474 rmmod nvme_fabrics 00:17:30.474 rmmod nvme_keyring 00:17:30.474 15:27:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:30.474 15:27:47 -- nvmf/common.sh@124 -- # set -e 00:17:30.474 15:27:47 -- nvmf/common.sh@125 -- # return 0 00:17:30.474 15:27:47 -- nvmf/common.sh@478 -- # '[' -n 1634062 ']' 00:17:30.474 15:27:47 -- nvmf/common.sh@479 -- # killprocess 1634062 00:17:30.474 15:27:47 -- common/autotest_common.sh@936 -- # '[' -z 1634062 ']' 00:17:30.474 15:27:47 -- common/autotest_common.sh@940 -- # kill -0 1634062 00:17:30.474 15:27:47 -- common/autotest_common.sh@941 -- # uname 00:17:30.474 15:27:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:30.474 15:27:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1634062 00:17:30.474 15:27:47 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:30.474 15:27:47 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:30.474 15:27:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1634062' 00:17:30.474 killing process with pid 1634062 00:17:30.474 15:27:47 -- common/autotest_common.sh@955 -- # kill 1634062 00:17:30.474 15:27:47 -- common/autotest_common.sh@960 -- # wait 1634062 00:17:30.735 
15:27:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:30.735 15:27:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:30.735 15:27:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:30.735 15:27:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:30.735 15:27:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:30.735 15:27:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.735 15:27:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.735 15:27:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.280 15:27:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:33.280 00:17:33.280 real 0m12.099s 00:17:33.280 user 0m13.677s 00:17:33.280 sys 0m6.266s 00:17:33.280 15:27:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:33.280 15:27:50 -- common/autotest_common.sh@10 -- # set +x 00:17:33.280 ************************************ 00:17:33.280 END TEST nvmf_bdevio_no_huge 00:17:33.280 ************************************ 00:17:33.280 15:27:50 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:33.280 15:27:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:33.280 15:27:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:33.280 15:27:50 -- common/autotest_common.sh@10 -- # set +x 00:17:33.280 ************************************ 00:17:33.280 START TEST nvmf_tls 00:17:33.280 ************************************ 00:17:33.280 15:27:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:33.280 * Looking for test storage... 
00:17:33.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:33.280 15:27:50 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:33.280 15:27:50 -- nvmf/common.sh@7 -- # uname -s 00:17:33.280 15:27:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.280 15:27:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.280 15:27:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.280 15:27:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.280 15:27:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.280 15:27:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.280 15:27:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.280 15:27:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.280 15:27:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.280 15:27:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.280 15:27:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:33.280 15:27:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:33.280 15:27:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.280 15:27:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.280 15:27:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:33.280 15:27:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:33.280 15:27:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:33.280 15:27:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.280 15:27:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.280 15:27:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.280 15:27:50 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.280 15:27:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.280 15:27:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.280 15:27:50 -- paths/export.sh@5 -- # export PATH 00:17:33.280 15:27:50 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.280 15:27:50 -- nvmf/common.sh@47 -- # : 0 00:17:33.280 15:27:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:33.281 15:27:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:33.281 15:27:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:33.281 15:27:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.281 15:27:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.281 15:27:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:33.281 15:27:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:33.281 15:27:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:33.281 15:27:50 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:33.281 15:27:50 -- target/tls.sh@62 -- # nvmftestinit 00:17:33.281 15:27:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:33.281 15:27:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.281 15:27:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:33.281 15:27:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:33.281 15:27:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:33.281 15:27:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.281 15:27:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.281 15:27:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.281 15:27:50 -- nvmf/common.sh@403 -- # [[ phy != virt 
]] 00:17:33.281 15:27:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:33.281 15:27:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:33.281 15:27:50 -- common/autotest_common.sh@10 -- # set +x 00:17:41.427 15:27:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:41.427 15:27:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:41.427 15:27:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:41.427 15:27:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:41.427 15:27:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:41.427 15:27:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:41.427 15:27:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:41.427 15:27:57 -- nvmf/common.sh@295 -- # net_devs=() 00:17:41.428 15:27:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:41.428 15:27:57 -- nvmf/common.sh@296 -- # e810=() 00:17:41.428 15:27:57 -- nvmf/common.sh@296 -- # local -ga e810 00:17:41.428 15:27:57 -- nvmf/common.sh@297 -- # x722=() 00:17:41.428 15:27:57 -- nvmf/common.sh@297 -- # local -ga x722 00:17:41.428 15:27:57 -- nvmf/common.sh@298 -- # mlx=() 00:17:41.428 15:27:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:41.428 15:27:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:41.428 15:27:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:41.428 15:27:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:41.428 15:27:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:41.428 15:27:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:41.428 15:27:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:41.428 15:27:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:41.428 15:27:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:41.428 15:27:57 -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:41.428 15:27:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:41.428 15:27:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:41.428 15:27:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:41.428 15:27:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:41.428 15:27:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:41.428 15:27:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:41.428 15:27:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:41.428 15:27:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:41.428 15:27:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:41.428 15:27:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:41.428 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:41.428 15:27:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:41.428 15:27:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:41.428 15:27:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.428 15:27:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.428 15:27:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:41.428 15:27:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:41.428 15:27:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:41.428 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:41.428 15:27:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:41.428 15:27:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:41.428 15:27:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.428 15:27:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.428 15:27:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:41.428 15:27:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:41.428 15:27:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:41.428 15:27:57 -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:17:41.428 15:27:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:41.428 15:27:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.428 15:27:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:41.428 15:27:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.428 15:27:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:41.428 Found net devices under 0000:31:00.0: cvl_0_0 00:17:41.428 15:27:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.428 15:27:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:41.428 15:27:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.428 15:27:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:41.428 15:27:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.428 15:27:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:41.428 Found net devices under 0000:31:00.1: cvl_0_1 00:17:41.428 15:27:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.428 15:27:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:41.428 15:27:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:41.428 15:27:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:41.428 15:27:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:41.428 15:27:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:41.428 15:27:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:41.428 15:27:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:41.428 15:27:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:41.428 15:27:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:41.428 15:27:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:41.428 15:27:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:41.428 15:27:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:17:41.428 15:27:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:41.428 15:27:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:41.428 15:27:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:41.428 15:27:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:41.428 15:27:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:41.428 15:27:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:41.428 15:27:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:41.428 15:27:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:41.428 15:27:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:41.428 15:27:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:41.428 15:27:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:41.428 15:27:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:41.428 15:27:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:41.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:41.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.706 ms 00:17:41.428 00:17:41.428 --- 10.0.0.2 ping statistics --- 00:17:41.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.428 rtt min/avg/max/mdev = 0.706/0.706/0.706/0.000 ms 00:17:41.428 15:27:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:41.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:41.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:17:41.428 00:17:41.428 --- 10.0.0.1 ping statistics --- 00:17:41.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.428 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:17:41.428 15:27:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:41.428 15:27:57 -- nvmf/common.sh@411 -- # return 0 00:17:41.428 15:27:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:41.428 15:27:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:41.428 15:27:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:41.428 15:27:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:41.428 15:27:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:41.428 15:27:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:41.428 15:27:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:41.428 15:27:57 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:41.428 15:27:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:41.428 15:27:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:41.428 15:27:57 -- common/autotest_common.sh@10 -- # set +x 00:17:41.428 15:27:57 -- nvmf/common.sh@470 -- # nvmfpid=1638817 00:17:41.428 15:27:57 -- nvmf/common.sh@471 -- # waitforlisten 1638817 00:17:41.428 15:27:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:41.428 15:27:57 -- common/autotest_common.sh@817 -- # '[' -z 1638817 ']' 00:17:41.428 15:27:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.428 15:27:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:41.428 15:27:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:41.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.428 15:27:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:41.428 15:27:57 -- common/autotest_common.sh@10 -- # set +x 00:17:41.428 [2024-04-26 15:27:57.853702] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:17:41.428 [2024-04-26 15:27:57.853748] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.428 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.428 [2024-04-26 15:27:57.937947] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.428 [2024-04-26 15:27:58.000199] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.428 [2024-04-26 15:27:58.000240] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.428 [2024-04-26 15:27:58.000248] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.428 [2024-04-26 15:27:58.000254] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.428 [2024-04-26 15:27:58.000260] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:41.428 [2024-04-26 15:27:58.000285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.428 15:27:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:41.428 15:27:58 -- common/autotest_common.sh@850 -- # return 0 00:17:41.428 15:27:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:41.428 15:27:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:41.428 15:27:58 -- common/autotest_common.sh@10 -- # set +x 00:17:41.428 15:27:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.428 15:27:58 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:41.428 15:27:58 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:41.428 true 00:17:41.428 15:27:58 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:41.428 15:27:58 -- target/tls.sh@73 -- # jq -r .tls_version 00:17:41.690 15:27:58 -- target/tls.sh@73 -- # version=0 00:17:41.690 15:27:58 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:41.690 15:27:58 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:41.952 15:27:59 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:41.952 15:27:59 -- target/tls.sh@81 -- # jq -r .tls_version 00:17:41.952 15:27:59 -- target/tls.sh@81 -- # version=13 00:17:41.952 15:27:59 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:41.952 15:27:59 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:42.214 15:27:59 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:42.214 15:27:59 -- target/tls.sh@89 -- # jq -r .tls_version 
00:17:42.214 15:27:59 -- target/tls.sh@89 -- # version=7 00:17:42.214 15:27:59 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:42.214 15:27:59 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:42.214 15:27:59 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:42.476 15:27:59 -- target/tls.sh@96 -- # ktls=false 00:17:42.476 15:27:59 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:42.476 15:27:59 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:42.737 15:27:59 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:42.737 15:27:59 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:42.737 15:28:00 -- target/tls.sh@104 -- # ktls=true 00:17:42.737 15:28:00 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:42.738 15:28:00 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:42.999 15:28:00 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:42.999 15:28:00 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:43.263 15:28:00 -- target/tls.sh@112 -- # ktls=false 00:17:43.263 15:28:00 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:43.263 15:28:00 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:43.263 15:28:00 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:43.263 15:28:00 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:43.263 15:28:00 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:43.263 15:28:00 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:17:43.263 15:28:00 -- nvmf/common.sh@693 -- # digest=1 00:17:43.263 15:28:00 -- nvmf/common.sh@694 -- # 
python - 00:17:43.263 15:28:00 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:43.263 15:28:00 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:43.263 15:28:00 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:43.263 15:28:00 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:43.263 15:28:00 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:43.263 15:28:00 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:17:43.263 15:28:00 -- nvmf/common.sh@693 -- # digest=1 00:17:43.263 15:28:00 -- nvmf/common.sh@694 -- # python - 00:17:43.263 15:28:00 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:43.263 15:28:00 -- target/tls.sh@121 -- # mktemp 00:17:43.263 15:28:00 -- target/tls.sh@121 -- # key_path=/tmp/tmp.rHm1ep33Qn 00:17:43.263 15:28:00 -- target/tls.sh@122 -- # mktemp 00:17:43.263 15:28:00 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.CSyOIW5qRK 00:17:43.263 15:28:00 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:43.263 15:28:00 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:43.263 15:28:00 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.rHm1ep33Qn 00:17:43.263 15:28:00 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.CSyOIW5qRK 00:17:43.263 15:28:00 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:43.525 15:28:00 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:43.787 15:28:01 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.rHm1ep33Qn 00:17:43.787 15:28:01 -- target/tls.sh@49 -- # local key=/tmp/tmp.rHm1ep33Qn 00:17:43.787 15:28:01 -- target/tls.sh@51 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:43.787 [2024-04-26 15:28:01.182429] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.787 15:28:01 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:44.048 15:28:01 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:44.309 [2024-04-26 15:28:01.515234] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:44.309 [2024-04-26 15:28:01.515574] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:44.309 15:28:01 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:44.309 malloc0 00:17:44.309 15:28:01 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:44.570 15:28:01 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rHm1ep33Qn 00:17:44.570 [2024-04-26 15:28:01.986644] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:44.570 15:28:02 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.rHm1ep33Qn 00:17:44.832 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.834 
Initializing NVMe Controllers 00:17:54.834 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:54.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:54.834 Initialization complete. Launching workers. 00:17:54.834 ======================================================== 00:17:54.834 Latency(us) 00:17:54.834 Device Information : IOPS MiB/s Average min max 00:17:54.834 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18544.97 72.44 3451.14 1123.24 5299.63 00:17:54.834 ======================================================== 00:17:54.834 Total : 18544.97 72.44 3451.14 1123.24 5299.63 00:17:54.834 00:17:54.834 15:28:12 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rHm1ep33Qn 00:17:54.834 15:28:12 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:54.834 15:28:12 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:54.834 15:28:12 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:54.834 15:28:12 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rHm1ep33Qn' 00:17:54.834 15:28:12 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:54.834 15:28:12 -- target/tls.sh@28 -- # bdevperf_pid=1641551 00:17:54.834 15:28:12 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:54.834 15:28:12 -- target/tls.sh@31 -- # waitforlisten 1641551 /var/tmp/bdevperf.sock 00:17:54.834 15:28:12 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:54.834 15:28:12 -- common/autotest_common.sh@817 -- # '[' -z 1641551 ']' 00:17:54.834 15:28:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:54.834 15:28:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:54.834 15:28:12 -- common/autotest_common.sh@824 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:54.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:54.834 15:28:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:54.834 15:28:12 -- common/autotest_common.sh@10 -- # set +x 00:17:54.834 [2024-04-26 15:28:12.155822] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:17:54.834 [2024-04-26 15:28:12.155884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1641551 ] 00:17:54.834 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.834 [2024-04-26 15:28:12.206507] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.834 [2024-04-26 15:28:12.256977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.778 15:28:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:55.778 15:28:12 -- common/autotest_common.sh@850 -- # return 0 00:17:55.778 15:28:12 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rHm1ep33Qn 00:17:55.778 [2024-04-26 15:28:13.038127] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:55.778 [2024-04-26 15:28:13.038188] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:55.778 TLSTESTn1 00:17:55.778 15:28:13 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:55.778 
Running I/O for 10 seconds... 00:18:08.016 00:18:08.016 Latency(us) 00:18:08.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.016 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:08.016 Verification LBA range: start 0x0 length 0x2000 00:18:08.016 TLSTESTn1 : 10.02 5804.57 22.67 0.00 0.00 22018.51 4614.83 62477.65 00:18:08.016 =================================================================================================================== 00:18:08.016 Total : 5804.57 22.67 0.00 0.00 22018.51 4614.83 62477.65 00:18:08.016 0 00:18:08.016 15:28:23 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:08.016 15:28:23 -- target/tls.sh@45 -- # killprocess 1641551 00:18:08.016 15:28:23 -- common/autotest_common.sh@936 -- # '[' -z 1641551 ']' 00:18:08.016 15:28:23 -- common/autotest_common.sh@940 -- # kill -0 1641551 00:18:08.016 15:28:23 -- common/autotest_common.sh@941 -- # uname 00:18:08.016 15:28:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:08.016 15:28:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1641551 00:18:08.016 15:28:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:08.016 15:28:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:08.016 15:28:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1641551' 00:18:08.016 killing process with pid 1641551 00:18:08.016 15:28:23 -- common/autotest_common.sh@955 -- # kill 1641551 00:18:08.016 Received shutdown signal, test time was about 10.000000 seconds 00:18:08.016 00:18:08.016 Latency(us) 00:18:08.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.016 =================================================================================================================== 00:18:08.016 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:08.016 [2024-04-26 15:28:23.334717] app.c: 
937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:08.016 15:28:23 -- common/autotest_common.sh@960 -- # wait 1641551 00:18:08.016 15:28:23 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CSyOIW5qRK 00:18:08.016 15:28:23 -- common/autotest_common.sh@638 -- # local es=0 00:18:08.016 15:28:23 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CSyOIW5qRK 00:18:08.016 15:28:23 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:08.016 15:28:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:08.016 15:28:23 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:08.016 15:28:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:08.016 15:28:23 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CSyOIW5qRK 00:18:08.016 15:28:23 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:08.016 15:28:23 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:08.016 15:28:23 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:08.016 15:28:23 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.CSyOIW5qRK' 00:18:08.016 15:28:23 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:08.016 15:28:23 -- target/tls.sh@28 -- # bdevperf_pid=1643897 00:18:08.016 15:28:23 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:08.016 15:28:23 -- target/tls.sh@31 -- # waitforlisten 1643897 /var/tmp/bdevperf.sock 00:18:08.016 15:28:23 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:08.016 15:28:23 -- common/autotest_common.sh@817 -- # '[' -z 1643897 ']' 00:18:08.016 
15:28:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:08.016 15:28:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:08.016 15:28:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:08.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:08.016 15:28:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:08.016 15:28:23 -- common/autotest_common.sh@10 -- # set +x 00:18:08.016 [2024-04-26 15:28:23.498463] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:18:08.016 [2024-04-26 15:28:23.498520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1643897 ] 00:18:08.016 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.016 [2024-04-26 15:28:23.547785] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.016 [2024-04-26 15:28:23.597546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.016 15:28:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:08.016 15:28:24 -- common/autotest_common.sh@850 -- # return 0 00:18:08.016 15:28:24 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CSyOIW5qRK 00:18:08.016 [2024-04-26 15:28:24.418404] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:08.016 [2024-04-26 15:28:24.418463] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed 
in v24.09 00:18:08.016 [2024-04-26 15:28:24.425607] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:08.016 [2024-04-26 15:28:24.426405] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7dbae0 (107): Transport endpoint is not connected 00:18:08.016 [2024-04-26 15:28:24.427400] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7dbae0 (9): Bad file descriptor 00:18:08.016 [2024-04-26 15:28:24.428402] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:08.016 [2024-04-26 15:28:24.428409] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:08.016 [2024-04-26 15:28:24.428414] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:08.016 request: 00:18:08.016 { 00:18:08.016 "name": "TLSTEST", 00:18:08.016 "trtype": "tcp", 00:18:08.016 "traddr": "10.0.0.2", 00:18:08.016 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:08.016 "adrfam": "ipv4", 00:18:08.016 "trsvcid": "4420", 00:18:08.017 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.017 "psk": "/tmp/tmp.CSyOIW5qRK", 00:18:08.017 "method": "bdev_nvme_attach_controller", 00:18:08.017 "req_id": 1 00:18:08.017 } 00:18:08.017 Got JSON-RPC error response 00:18:08.017 response: 00:18:08.017 { 00:18:08.017 "code": -32602, 00:18:08.017 "message": "Invalid parameters" 00:18:08.017 } 00:18:08.017 15:28:24 -- target/tls.sh@36 -- # killprocess 1643897 00:18:08.017 15:28:24 -- common/autotest_common.sh@936 -- # '[' -z 1643897 ']' 00:18:08.017 15:28:24 -- common/autotest_common.sh@940 -- # kill -0 1643897 00:18:08.017 15:28:24 -- common/autotest_common.sh@941 -- # uname 00:18:08.017 15:28:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:08.017 15:28:24 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1643897 00:18:08.017 15:28:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:08.017 15:28:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:08.017 15:28:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1643897' 00:18:08.017 killing process with pid 1643897 00:18:08.017 15:28:24 -- common/autotest_common.sh@955 -- # kill 1643897 00:18:08.017 Received shutdown signal, test time was about 10.000000 seconds 00:18:08.017 00:18:08.017 Latency(us) 00:18:08.017 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.017 =================================================================================================================== 00:18:08.017 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:08.017 [2024-04-26 15:28:24.514809] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:08.017 15:28:24 -- common/autotest_common.sh@960 -- # wait 1643897 00:18:08.017 15:28:24 -- target/tls.sh@37 -- # return 1 00:18:08.017 15:28:24 -- common/autotest_common.sh@641 -- # es=1 00:18:08.017 15:28:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:08.017 15:28:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:08.017 15:28:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:08.017 15:28:24 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rHm1ep33Qn 00:18:08.017 15:28:24 -- common/autotest_common.sh@638 -- # local es=0 00:18:08.017 15:28:24 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rHm1ep33Qn 00:18:08.017 15:28:24 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:08.017 15:28:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 
00:18:08.017 15:28:24 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:08.017 15:28:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:08.017 15:28:24 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rHm1ep33Qn 00:18:08.017 15:28:24 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:08.017 15:28:24 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:08.017 15:28:24 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:08.017 15:28:24 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rHm1ep33Qn' 00:18:08.017 15:28:24 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:08.017 15:28:24 -- target/tls.sh@28 -- # bdevperf_pid=1643974 00:18:08.017 15:28:24 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:08.017 15:28:24 -- target/tls.sh@31 -- # waitforlisten 1643974 /var/tmp/bdevperf.sock 00:18:08.017 15:28:24 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:08.017 15:28:24 -- common/autotest_common.sh@817 -- # '[' -z 1643974 ']' 00:18:08.017 15:28:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:08.017 15:28:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:08.017 15:28:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:08.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:08.017 15:28:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:08.017 15:28:24 -- common/autotest_common.sh@10 -- # set +x 00:18:08.017 [2024-04-26 15:28:24.668601] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:18:08.017 [2024-04-26 15:28:24.668652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1643974 ] 00:18:08.017 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.017 [2024-04-26 15:28:24.719412] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.017 [2024-04-26 15:28:24.770093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.017 15:28:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:08.017 15:28:25 -- common/autotest_common.sh@850 -- # return 0 00:18:08.017 15:28:25 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.rHm1ep33Qn 00:18:08.279 [2024-04-26 15:28:25.579264] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:08.279 [2024-04-26 15:28:25.579328] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:08.279 [2024-04-26 15:28:25.588354] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:08.279 [2024-04-26 15:28:25.588375] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:08.279 [2024-04-26 15:28:25.588396] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:08.279 [2024-04-26 15:28:25.588426] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1301ae0 (107): Transport endpoint is not connected 00:18:08.279 [2024-04-26 15:28:25.589408] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1301ae0 (9): Bad file descriptor 00:18:08.279 [2024-04-26 15:28:25.590410] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:08.279 [2024-04-26 15:28:25.590417] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:08.279 [2024-04-26 15:28:25.590425] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:08.279 request: 00:18:08.279 { 00:18:08.279 "name": "TLSTEST", 00:18:08.279 "trtype": "tcp", 00:18:08.279 "traddr": "10.0.0.2", 00:18:08.279 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:08.279 "adrfam": "ipv4", 00:18:08.279 "trsvcid": "4420", 00:18:08.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.279 "psk": "/tmp/tmp.rHm1ep33Qn", 00:18:08.279 "method": "bdev_nvme_attach_controller", 00:18:08.279 "req_id": 1 00:18:08.279 } 00:18:08.279 Got JSON-RPC error response 00:18:08.279 response: 00:18:08.279 { 00:18:08.279 "code": -32602, 00:18:08.279 "message": "Invalid parameters" 00:18:08.279 } 00:18:08.279 15:28:25 -- target/tls.sh@36 -- # killprocess 1643974 00:18:08.279 15:28:25 -- common/autotest_common.sh@936 -- # '[' -z 1643974 ']' 00:18:08.279 15:28:25 -- common/autotest_common.sh@940 -- # kill -0 1643974 00:18:08.279 15:28:25 -- common/autotest_common.sh@941 -- # uname 00:18:08.279 15:28:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:08.279 15:28:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1643974 00:18:08.279 15:28:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:08.279 15:28:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:08.279 15:28:25 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 1643974' 00:18:08.279 killing process with pid 1643974 00:18:08.279 15:28:25 -- common/autotest_common.sh@955 -- # kill 1643974 00:18:08.279 Received shutdown signal, test time was about 10.000000 seconds 00:18:08.279 00:18:08.279 Latency(us) 00:18:08.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.279 =================================================================================================================== 00:18:08.279 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:08.279 [2024-04-26 15:28:25.671970] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:08.279 15:28:25 -- common/autotest_common.sh@960 -- # wait 1643974 00:18:08.541 15:28:25 -- target/tls.sh@37 -- # return 1 00:18:08.541 15:28:25 -- common/autotest_common.sh@641 -- # es=1 00:18:08.541 15:28:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:08.541 15:28:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:08.541 15:28:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:08.541 15:28:25 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rHm1ep33Qn 00:18:08.541 15:28:25 -- common/autotest_common.sh@638 -- # local es=0 00:18:08.541 15:28:25 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rHm1ep33Qn 00:18:08.541 15:28:25 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:08.541 15:28:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:08.541 15:28:25 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:08.541 15:28:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:08.541 15:28:25 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.rHm1ep33Qn 00:18:08.541 15:28:25 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:08.541 15:28:25 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:08.541 15:28:25 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:08.541 15:28:25 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rHm1ep33Qn' 00:18:08.541 15:28:25 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:08.542 15:28:25 -- target/tls.sh@28 -- # bdevperf_pid=1644258 00:18:08.542 15:28:25 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:08.542 15:28:25 -- target/tls.sh@31 -- # waitforlisten 1644258 /var/tmp/bdevperf.sock 00:18:08.542 15:28:25 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:08.542 15:28:25 -- common/autotest_common.sh@817 -- # '[' -z 1644258 ']' 00:18:08.542 15:28:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:08.542 15:28:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:08.542 15:28:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:08.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:08.542 15:28:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:08.542 15:28:25 -- common/autotest_common.sh@10 -- # set +x 00:18:08.542 [2024-04-26 15:28:25.834746] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:18:08.542 [2024-04-26 15:28:25.834799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1644258 ] 00:18:08.542 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.542 [2024-04-26 15:28:25.885501] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.542 [2024-04-26 15:28:25.934061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.486 15:28:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:09.486 15:28:26 -- common/autotest_common.sh@850 -- # return 0 00:18:09.486 15:28:26 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rHm1ep33Qn 00:18:09.486 [2024-04-26 15:28:26.742960] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:09.486 [2024-04-26 15:28:26.743029] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:09.486 [2024-04-26 15:28:26.751178] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:09.486 [2024-04-26 15:28:26.751197] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:09.486 [2024-04-26 15:28:26.751218] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:09.486 [2024-04-26 15:28:26.752072] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f9ae0 (107): Transport endpoint is not connected 00:18:09.486 [2024-04-26 15:28:26.753067] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f9ae0 (9): Bad file descriptor 00:18:09.486 [2024-04-26 15:28:26.754069] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:09.486 [2024-04-26 15:28:26.754077] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:09.486 [2024-04-26 15:28:26.754083] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:18:09.486 request: 00:18:09.486 { 00:18:09.486 "name": "TLSTEST", 00:18:09.486 "trtype": "tcp", 00:18:09.486 "traddr": "10.0.0.2", 00:18:09.486 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:09.486 "adrfam": "ipv4", 00:18:09.486 "trsvcid": "4420", 00:18:09.486 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:09.486 "psk": "/tmp/tmp.rHm1ep33Qn", 00:18:09.486 "method": "bdev_nvme_attach_controller", 00:18:09.486 "req_id": 1 00:18:09.486 } 00:18:09.486 Got JSON-RPC error response 00:18:09.486 response: 00:18:09.486 { 00:18:09.486 "code": -32602, 00:18:09.486 "message": "Invalid parameters" 00:18:09.486 } 00:18:09.486 15:28:26 -- target/tls.sh@36 -- # killprocess 1644258 00:18:09.486 15:28:26 -- common/autotest_common.sh@936 -- # '[' -z 1644258 ']' 00:18:09.486 15:28:26 -- common/autotest_common.sh@940 -- # kill -0 1644258 00:18:09.486 15:28:26 -- common/autotest_common.sh@941 -- # uname 00:18:09.486 15:28:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:09.486 15:28:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1644258 00:18:09.486 15:28:26 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:09.486 15:28:26 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:09.486 15:28:26 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 1644258' 00:18:09.486 killing process with pid 1644258 00:18:09.486 15:28:26 -- common/autotest_common.sh@955 -- # kill 1644258 00:18:09.486 Received shutdown signal, test time was about 10.000000 seconds 00:18:09.486 00:18:09.486 Latency(us) 00:18:09.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.486 =================================================================================================================== 00:18:09.486 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:09.486 [2024-04-26 15:28:26.839598] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:09.486 15:28:26 -- common/autotest_common.sh@960 -- # wait 1644258 00:18:09.748 15:28:26 -- target/tls.sh@37 -- # return 1 00:18:09.748 15:28:26 -- common/autotest_common.sh@641 -- # es=1 00:18:09.748 15:28:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:09.748 15:28:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:09.748 15:28:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:09.748 15:28:26 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:09.748 15:28:26 -- common/autotest_common.sh@638 -- # local es=0 00:18:09.748 15:28:26 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:09.748 15:28:26 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:09.748 15:28:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:09.748 15:28:26 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:09.748 15:28:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:09.748 15:28:26 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:09.748 15:28:26 -- 
target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:09.748 15:28:26 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:09.748 15:28:26 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:09.748 15:28:26 -- target/tls.sh@23 -- # psk= 00:18:09.748 15:28:26 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:09.748 15:28:26 -- target/tls.sh@28 -- # bdevperf_pid=1644593 00:18:09.748 15:28:26 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:09.748 15:28:26 -- target/tls.sh@31 -- # waitforlisten 1644593 /var/tmp/bdevperf.sock 00:18:09.748 15:28:26 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:09.748 15:28:26 -- common/autotest_common.sh@817 -- # '[' -z 1644593 ']' 00:18:09.748 15:28:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:09.748 15:28:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:09.748 15:28:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:09.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:09.748 15:28:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:09.748 15:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.748 [2024-04-26 15:28:26.993334] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:18:09.748 [2024-04-26 15:28:26.993384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1644593 ] 00:18:09.748 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.748 [2024-04-26 15:28:27.043886] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.748 [2024-04-26 15:28:27.093272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.319 15:28:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:10.319 15:28:27 -- common/autotest_common.sh@850 -- # return 0 00:18:10.319 15:28:27 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:10.581 [2024-04-26 15:28:27.904969] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:10.581 [2024-04-26 15:28:27.906793] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb63c0 (9): Bad file descriptor 00:18:10.581 [2024-04-26 15:28:27.907793] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:10.581 [2024-04-26 15:28:27.907801] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:10.581 [2024-04-26 15:28:27.907806] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:10.581 request: 00:18:10.581 { 00:18:10.581 "name": "TLSTEST", 00:18:10.581 "trtype": "tcp", 00:18:10.581 "traddr": "10.0.0.2", 00:18:10.581 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:10.581 "adrfam": "ipv4", 00:18:10.581 "trsvcid": "4420", 00:18:10.581 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.581 "method": "bdev_nvme_attach_controller", 00:18:10.581 "req_id": 1 00:18:10.581 } 00:18:10.581 Got JSON-RPC error response 00:18:10.581 response: 00:18:10.581 { 00:18:10.581 "code": -32602, 00:18:10.581 "message": "Invalid parameters" 00:18:10.581 } 00:18:10.581 15:28:27 -- target/tls.sh@36 -- # killprocess 1644593 00:18:10.581 15:28:27 -- common/autotest_common.sh@936 -- # '[' -z 1644593 ']' 00:18:10.581 15:28:27 -- common/autotest_common.sh@940 -- # kill -0 1644593 00:18:10.581 15:28:27 -- common/autotest_common.sh@941 -- # uname 00:18:10.581 15:28:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:10.581 15:28:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1644593 00:18:10.581 15:28:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:10.581 15:28:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:10.581 15:28:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1644593' 00:18:10.581 killing process with pid 1644593 00:18:10.581 15:28:27 -- common/autotest_common.sh@955 -- # kill 1644593 00:18:10.581 Received shutdown signal, test time was about 10.000000 seconds 00:18:10.581 00:18:10.581 Latency(us) 00:18:10.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.581 =================================================================================================================== 00:18:10.581 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:10.581 15:28:27 -- common/autotest_common.sh@960 -- # wait 1644593 00:18:10.842 15:28:28 -- target/tls.sh@37 -- # return 1 00:18:10.842 15:28:28 -- 
common/autotest_common.sh@641 -- # es=1 00:18:10.842 15:28:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:10.842 15:28:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:10.843 15:28:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:10.843 15:28:28 -- target/tls.sh@158 -- # killprocess 1638817 00:18:10.843 15:28:28 -- common/autotest_common.sh@936 -- # '[' -z 1638817 ']' 00:18:10.843 15:28:28 -- common/autotest_common.sh@940 -- # kill -0 1638817 00:18:10.843 15:28:28 -- common/autotest_common.sh@941 -- # uname 00:18:10.843 15:28:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:10.843 15:28:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1638817 00:18:10.843 15:28:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:10.843 15:28:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:10.843 15:28:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1638817' 00:18:10.843 killing process with pid 1638817 00:18:10.843 15:28:28 -- common/autotest_common.sh@955 -- # kill 1638817 00:18:10.843 [2024-04-26 15:28:28.149264] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:10.843 15:28:28 -- common/autotest_common.sh@960 -- # wait 1638817 00:18:10.843 15:28:28 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:10.843 15:28:28 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:10.843 15:28:28 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:10.843 15:28:28 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:18:10.843 15:28:28 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:10.843 15:28:28 -- nvmf/common.sh@693 -- # digest=2 00:18:10.843 15:28:28 -- nvmf/common.sh@694 -- # python - 00:18:11.104 15:28:28 -- 
target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:11.104 15:28:28 -- target/tls.sh@160 -- # mktemp 00:18:11.104 15:28:28 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.H90Cb48IfE 00:18:11.104 15:28:28 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:11.104 15:28:28 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.H90Cb48IfE 00:18:11.104 15:28:28 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:11.104 15:28:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:11.104 15:28:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:11.104 15:28:28 -- common/autotest_common.sh@10 -- # set +x 00:18:11.104 15:28:28 -- nvmf/common.sh@470 -- # nvmfpid=1644806 00:18:11.104 15:28:28 -- nvmf/common.sh@471 -- # waitforlisten 1644806 00:18:11.104 15:28:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:11.104 15:28:28 -- common/autotest_common.sh@817 -- # '[' -z 1644806 ']' 00:18:11.104 15:28:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.104 15:28:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:11.104 15:28:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.104 15:28:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:11.104 15:28:28 -- common/autotest_common.sh@10 -- # set +x 00:18:11.104 [2024-04-26 15:28:28.367990] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:18:11.104 [2024-04-26 15:28:28.368044] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.104 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.104 [2024-04-26 15:28:28.454238] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.104 [2024-04-26 15:28:28.517298] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.104 [2024-04-26 15:28:28.517334] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.104 [2024-04-26 15:28:28.517340] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.104 [2024-04-26 15:28:28.517345] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.104 [2024-04-26 15:28:28.517349] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
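[Note: the `format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2` step above builds the `key_long=NVMeTLSkey-1:02:MDAx...==:` string via an inline Python heredoc. A minimal sketch of that encoding: base64 of the configured key bytes with a 4-byte CRC-32 trailer, wrapped in the `NVMeTLSkey-1:<hash>:<b64>:` interchange form. The function name and the little-endian CRC byte order are assumptions for illustration, not a verbatim copy of the SPDK helper.]

```python
import base64
import zlib


def format_interchange_psk(key: bytes, hash_id: int) -> str:
    """Sketch of the TLS PSK interchange encoding seen in the log:
    'NVMeTLSkey-1:<hash>:<base64(key || crc32(key))>:'.

    The CRC-32 trailer byte order (little-endian here) is an assumption.
    """
    crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte integrity trailer
    b64 = base64.b64encode(key + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hash_id:02}:{b64}:"


key = b"00112233445566778899aabbccddeeff0011223344556677"
print(format_interchange_psk(key, 2))
```

Because the 48-byte key is a multiple of 3, the first 64 base64 characters depend only on the key itself and match the `MDAxMTIy...NTU2Njc3` prefix in the log exactly; only the final 8 characters encode the CRC trailer.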
00:18:11.104 [2024-04-26 15:28:28.517371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.747 15:28:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:11.747 15:28:29 -- common/autotest_common.sh@850 -- # return 0 00:18:11.747 15:28:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:11.747 15:28:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:11.747 15:28:29 -- common/autotest_common.sh@10 -- # set +x 00:18:12.028 15:28:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.028 15:28:29 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.H90Cb48IfE 00:18:12.028 15:28:29 -- target/tls.sh@49 -- # local key=/tmp/tmp.H90Cb48IfE 00:18:12.028 15:28:29 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:12.028 [2024-04-26 15:28:29.354040] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.028 15:28:29 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:12.290 15:28:29 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:12.290 [2024-04-26 15:28:29.666812] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:12.290 [2024-04-26 15:28:29.667016] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:12.290 15:28:29 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:12.551 malloc0 00:18:12.551 15:28:29 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:18:12.551 15:28:29 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.H90Cb48IfE 00:18:12.812 [2024-04-26 15:28:30.137964] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:12.812 15:28:30 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H90Cb48IfE 00:18:12.812 15:28:30 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:12.812 15:28:30 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:12.812 15:28:30 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:12.812 15:28:30 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.H90Cb48IfE' 00:18:12.812 15:28:30 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:12.812 15:28:30 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:12.812 15:28:30 -- target/tls.sh@28 -- # bdevperf_pid=1645241 00:18:12.812 15:28:30 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:12.812 15:28:30 -- target/tls.sh@31 -- # waitforlisten 1645241 /var/tmp/bdevperf.sock 00:18:12.812 15:28:30 -- common/autotest_common.sh@817 -- # '[' -z 1645241 ']' 00:18:12.812 15:28:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:12.812 15:28:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:12.812 15:28:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:12.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:12.812 15:28:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:12.812 15:28:30 -- common/autotest_common.sh@10 -- # set +x 00:18:12.812 [2024-04-26 15:28:30.184332] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:18:12.812 [2024-04-26 15:28:30.184381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1645241 ] 00:18:12.812 EAL: No free 2048 kB hugepages reported on node 1 00:18:12.812 [2024-04-26 15:28:30.233971] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.073 [2024-04-26 15:28:30.284671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.073 15:28:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:13.073 15:28:30 -- common/autotest_common.sh@850 -- # return 0 00:18:13.073 15:28:30 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.H90Cb48IfE 00:18:13.073 [2024-04-26 15:28:30.495935] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:13.073 [2024-04-26 15:28:30.495985] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:13.334 TLSTESTn1 00:18:13.334 15:28:30 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:13.334 Running I/O for 10 seconds... 
00:18:23.358 00:18:23.358 Latency(us) 00:18:23.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.358 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:23.358 Verification LBA range: start 0x0 length 0x2000 00:18:23.358 TLSTESTn1 : 10.01 5764.89 22.52 0.00 0.00 22173.16 4696.75 28835.84 00:18:23.358 =================================================================================================================== 00:18:23.358 Total : 5764.89 22.52 0.00 0.00 22173.16 4696.75 28835.84 00:18:23.358 0 00:18:23.358 15:28:40 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:23.358 15:28:40 -- target/tls.sh@45 -- # killprocess 1645241 00:18:23.358 15:28:40 -- common/autotest_common.sh@936 -- # '[' -z 1645241 ']' 00:18:23.358 15:28:40 -- common/autotest_common.sh@940 -- # kill -0 1645241 00:18:23.358 15:28:40 -- common/autotest_common.sh@941 -- # uname 00:18:23.358 15:28:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:23.358 15:28:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1645241 00:18:23.358 15:28:40 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:23.358 15:28:40 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:23.358 15:28:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1645241' 00:18:23.358 killing process with pid 1645241 00:18:23.358 15:28:40 -- common/autotest_common.sh@955 -- # kill 1645241 00:18:23.358 Received shutdown signal, test time was about 10.000000 seconds 00:18:23.358 00:18:23.358 Latency(us) 00:18:23.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.358 =================================================================================================================== 00:18:23.358 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:23.358 [2024-04-26 15:28:40.792165] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: 
deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:23.358 15:28:40 -- common/autotest_common.sh@960 -- # wait 1645241 00:18:23.620 15:28:40 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.H90Cb48IfE 00:18:23.620 15:28:40 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H90Cb48IfE 00:18:23.620 15:28:40 -- common/autotest_common.sh@638 -- # local es=0 00:18:23.620 15:28:40 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H90Cb48IfE 00:18:23.620 15:28:40 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:23.620 15:28:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:23.620 15:28:40 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:23.620 15:28:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:23.620 15:28:40 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H90Cb48IfE 00:18:23.620 15:28:40 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:23.620 15:28:40 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:23.620 15:28:40 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:23.620 15:28:40 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.H90Cb48IfE' 00:18:23.620 15:28:40 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:23.620 15:28:40 -- target/tls.sh@28 -- # bdevperf_pid=1647328 00:18:23.620 15:28:40 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:23.620 15:28:40 -- target/tls.sh@31 -- # waitforlisten 1647328 /var/tmp/bdevperf.sock 00:18:23.620 15:28:40 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:23.620 15:28:40 -- common/autotest_common.sh@817 -- # '[' -z 
1647328 ']' 00:18:23.620 15:28:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:23.620 15:28:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:23.620 15:28:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:23.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:23.620 15:28:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:23.620 15:28:40 -- common/autotest_common.sh@10 -- # set +x 00:18:23.620 [2024-04-26 15:28:40.960281] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:18:23.620 [2024-04-26 15:28:40.960336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1647328 ] 00:18:23.620 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.620 [2024-04-26 15:28:41.009707] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.620 [2024-04-26 15:28:41.059769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:24.565 15:28:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:24.565 15:28:41 -- common/autotest_common.sh@850 -- # return 0 00:18:24.565 15:28:41 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.H90Cb48IfE 00:18:24.565 [2024-04-26 15:28:41.876661] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:24.565 [2024-04-26 15:28:41.876700] bdev_nvme.c:6071:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:24.565 [2024-04-26 
15:28:41.876706] bdev_nvme.c:6180:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.H90Cb48IfE 00:18:24.565 request: 00:18:24.565 { 00:18:24.565 "name": "TLSTEST", 00:18:24.565 "trtype": "tcp", 00:18:24.565 "traddr": "10.0.0.2", 00:18:24.565 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:24.565 "adrfam": "ipv4", 00:18:24.565 "trsvcid": "4420", 00:18:24.565 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.565 "psk": "/tmp/tmp.H90Cb48IfE", 00:18:24.565 "method": "bdev_nvme_attach_controller", 00:18:24.565 "req_id": 1 00:18:24.565 } 00:18:24.565 Got JSON-RPC error response 00:18:24.565 response: 00:18:24.565 { 00:18:24.565 "code": -1, 00:18:24.565 "message": "Operation not permitted" 00:18:24.565 } 00:18:24.565 15:28:41 -- target/tls.sh@36 -- # killprocess 1647328 00:18:24.565 15:28:41 -- common/autotest_common.sh@936 -- # '[' -z 1647328 ']' 00:18:24.565 15:28:41 -- common/autotest_common.sh@940 -- # kill -0 1647328 00:18:24.565 15:28:41 -- common/autotest_common.sh@941 -- # uname 00:18:24.565 15:28:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:24.565 15:28:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1647328 00:18:24.565 15:28:41 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:24.565 15:28:41 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:24.565 15:28:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1647328' 00:18:24.565 killing process with pid 1647328 00:18:24.565 15:28:41 -- common/autotest_common.sh@955 -- # kill 1647328 00:18:24.565 Received shutdown signal, test time was about 10.000000 seconds 00:18:24.565 00:18:24.565 Latency(us) 00:18:24.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.565 =================================================================================================================== 00:18:24.565 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:24.565 15:28:41 -- 
common/autotest_common.sh@960 -- # wait 1647328 00:18:24.827 15:28:42 -- target/tls.sh@37 -- # return 1 00:18:24.827 15:28:42 -- common/autotest_common.sh@641 -- # es=1 00:18:24.827 15:28:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:24.827 15:28:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:24.827 15:28:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:24.827 15:28:42 -- target/tls.sh@174 -- # killprocess 1644806 00:18:24.827 15:28:42 -- common/autotest_common.sh@936 -- # '[' -z 1644806 ']' 00:18:24.827 15:28:42 -- common/autotest_common.sh@940 -- # kill -0 1644806 00:18:24.827 15:28:42 -- common/autotest_common.sh@941 -- # uname 00:18:24.827 15:28:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:24.827 15:28:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1644806 00:18:24.827 15:28:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:24.827 15:28:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:24.827 15:28:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1644806' 00:18:24.827 killing process with pid 1644806 00:18:24.827 15:28:42 -- common/autotest_common.sh@955 -- # kill 1644806 00:18:24.827 [2024-04-26 15:28:42.111506] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:24.827 15:28:42 -- common/autotest_common.sh@960 -- # wait 1644806 00:18:24.827 15:28:42 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:24.827 15:28:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:24.827 15:28:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:24.827 15:28:42 -- common/autotest_common.sh@10 -- # set +x 00:18:24.827 15:28:42 -- nvmf/common.sh@470 -- # nvmfpid=1647585 00:18:24.827 15:28:42 -- nvmf/common.sh@471 -- # waitforlisten 1647585 00:18:24.827 15:28:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:24.827 15:28:42 -- common/autotest_common.sh@817 -- # '[' -z 1647585 ']' 00:18:24.827 15:28:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.827 15:28:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:24.827 15:28:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.827 15:28:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:24.827 15:28:42 -- common/autotest_common.sh@10 -- # set +x 00:18:25.088 [2024-04-26 15:28:42.298107] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:18:25.088 [2024-04-26 15:28:42.298195] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.088 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.088 [2024-04-26 15:28:42.382648] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.088 [2024-04-26 15:28:42.435442] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.088 [2024-04-26 15:28:42.435477] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:25.088 [2024-04-26 15:28:42.435482] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.088 [2024-04-26 15:28:42.435487] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.088 [2024-04-26 15:28:42.435491] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:25.088 [2024-04-26 15:28:42.435505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.660 15:28:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:25.660 15:28:43 -- common/autotest_common.sh@850 -- # return 0 00:18:25.660 15:28:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:25.660 15:28:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:25.660 15:28:43 -- common/autotest_common.sh@10 -- # set +x 00:18:25.660 15:28:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.660 15:28:43 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.H90Cb48IfE 00:18:25.660 15:28:43 -- common/autotest_common.sh@638 -- # local es=0 00:18:25.660 15:28:43 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.H90Cb48IfE 00:18:25.660 15:28:43 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:18:25.660 15:28:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:25.660 15:28:43 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:18:25.660 15:28:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:25.660 15:28:43 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.H90Cb48IfE 00:18:25.660 15:28:43 -- target/tls.sh@49 -- # local key=/tmp/tmp.H90Cb48IfE 00:18:25.660 15:28:43 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:25.920 [2024-04-26 15:28:43.229431] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.920 15:28:43 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:26.181 15:28:43 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 -k 00:18:26.181 [2024-04-26 15:28:43.522154] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:26.181 [2024-04-26 15:28:43.522337] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:26.181 15:28:43 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:26.441 malloc0 00:18:26.441 15:28:43 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:26.441 15:28:43 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.H90Cb48IfE 00:18:26.701 [2024-04-26 15:28:43.993259] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:26.701 [2024-04-26 15:28:43.993282] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:26.701 [2024-04-26 15:28:43.993299] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:18:26.701 request: 00:18:26.701 { 00:18:26.701 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.701 "host": "nqn.2016-06.io.spdk:host1", 00:18:26.701 "psk": "/tmp/tmp.H90Cb48IfE", 00:18:26.701 "method": "nvmf_subsystem_add_host", 00:18:26.701 "req_id": 1 00:18:26.701 } 00:18:26.701 Got JSON-RPC error response 00:18:26.701 response: 00:18:26.701 { 00:18:26.701 "code": -32603, 00:18:26.702 "message": "Internal error" 00:18:26.702 } 00:18:26.702 15:28:44 -- common/autotest_common.sh@641 -- # es=1 00:18:26.702 15:28:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:26.702 15:28:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:26.702 15:28:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:26.702 15:28:44 -- target/tls.sh@180 -- # killprocess 1647585 
00:18:26.702 15:28:44 -- common/autotest_common.sh@936 -- # '[' -z 1647585 ']' 00:18:26.702 15:28:44 -- common/autotest_common.sh@940 -- # kill -0 1647585 00:18:26.702 15:28:44 -- common/autotest_common.sh@941 -- # uname 00:18:26.702 15:28:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:26.702 15:28:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1647585 00:18:26.702 15:28:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:26.702 15:28:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:26.702 15:28:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1647585' 00:18:26.702 killing process with pid 1647585 00:18:26.702 15:28:44 -- common/autotest_common.sh@955 -- # kill 1647585 00:18:26.702 15:28:44 -- common/autotest_common.sh@960 -- # wait 1647585 00:18:26.981 15:28:44 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.H90Cb48IfE 00:18:26.981 15:28:44 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:26.981 15:28:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:26.981 15:28:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:26.981 15:28:44 -- common/autotest_common.sh@10 -- # set +x 00:18:26.981 15:28:44 -- nvmf/common.sh@470 -- # nvmfpid=1648045 00:18:26.981 15:28:44 -- nvmf/common.sh@471 -- # waitforlisten 1648045 00:18:26.981 15:28:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:26.981 15:28:44 -- common/autotest_common.sh@817 -- # '[' -z 1648045 ']' 00:18:26.981 15:28:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.981 15:28:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:26.981 15:28:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:26.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.981 15:28:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:26.981 15:28:44 -- common/autotest_common.sh@10 -- # set +x 00:18:26.981 [2024-04-26 15:28:44.252806] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:18:26.981 [2024-04-26 15:28:44.252875] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.981 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.981 [2024-04-26 15:28:44.336223] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.981 [2024-04-26 15:28:44.389809] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.981 [2024-04-26 15:28:44.389848] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.981 [2024-04-26 15:28:44.389853] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.981 [2024-04-26 15:28:44.389858] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.981 [2024-04-26 15:28:44.389866] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:26.981 [2024-04-26 15:28:44.389884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.920 15:28:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:27.920 15:28:45 -- common/autotest_common.sh@850 -- # return 0 00:18:27.920 15:28:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:27.920 15:28:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:27.920 15:28:45 -- common/autotest_common.sh@10 -- # set +x 00:18:27.920 15:28:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.920 15:28:45 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.H90Cb48IfE 00:18:27.920 15:28:45 -- target/tls.sh@49 -- # local key=/tmp/tmp.H90Cb48IfE 00:18:27.920 15:28:45 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:27.920 [2024-04-26 15:28:45.183804] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.920 15:28:45 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:27.920 15:28:45 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:28.181 [2024-04-26 15:28:45.492561] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:28.181 [2024-04-26 15:28:45.492744] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.181 15:28:45 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:28.441 malloc0 00:18:28.441 15:28:45 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:18:28.441 15:28:45 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.H90Cb48IfE 00:18:28.701 [2024-04-26 15:28:45.951646] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:28.701 15:28:45 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:28.701 15:28:45 -- target/tls.sh@188 -- # bdevperf_pid=1648403 00:18:28.701 15:28:45 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:28.701 15:28:45 -- target/tls.sh@191 -- # waitforlisten 1648403 /var/tmp/bdevperf.sock 00:18:28.701 15:28:45 -- common/autotest_common.sh@817 -- # '[' -z 1648403 ']' 00:18:28.701 15:28:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.701 15:28:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:28.701 15:28:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:28.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:28.701 15:28:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:28.701 15:28:45 -- common/autotest_common.sh@10 -- # set +x 00:18:28.701 [2024-04-26 15:28:45.994894] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:18:28.701 [2024-04-26 15:28:45.994943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648403 ] 00:18:28.701 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.701 [2024-04-26 15:28:46.049540] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.701 [2024-04-26 15:28:46.100113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.961 15:28:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:28.961 15:28:46 -- common/autotest_common.sh@850 -- # return 0 00:18:28.961 15:28:46 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.H90Cb48IfE 00:18:28.961 [2024-04-26 15:28:46.319549] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:28.961 [2024-04-26 15:28:46.319610] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:28.961 TLSTESTn1 00:18:29.222 15:28:46 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:29.222 15:28:46 -- target/tls.sh@196 -- # tgtconf='{ 00:18:29.222 "subsystems": [ 00:18:29.222 { 00:18:29.222 "subsystem": "keyring", 00:18:29.222 "config": [] 00:18:29.222 }, 00:18:29.222 { 00:18:29.222 "subsystem": "iobuf", 00:18:29.222 "config": [ 00:18:29.222 { 00:18:29.222 "method": "iobuf_set_options", 00:18:29.222 "params": { 00:18:29.222 "small_pool_count": 8192, 00:18:29.222 "large_pool_count": 1024, 00:18:29.222 "small_bufsize": 8192, 00:18:29.222 "large_bufsize": 135168 00:18:29.222 } 00:18:29.222 } 
00:18:29.222 ] 00:18:29.222 }, 00:18:29.222 { 00:18:29.222 "subsystem": "sock", 00:18:29.222 "config": [ 00:18:29.222 { 00:18:29.222 "method": "sock_impl_set_options", 00:18:29.222 "params": { 00:18:29.222 "impl_name": "posix", 00:18:29.222 "recv_buf_size": 2097152, 00:18:29.222 "send_buf_size": 2097152, 00:18:29.222 "enable_recv_pipe": true, 00:18:29.222 "enable_quickack": false, 00:18:29.222 "enable_placement_id": 0, 00:18:29.222 "enable_zerocopy_send_server": true, 00:18:29.222 "enable_zerocopy_send_client": false, 00:18:29.222 "zerocopy_threshold": 0, 00:18:29.222 "tls_version": 0, 00:18:29.222 "enable_ktls": false 00:18:29.222 } 00:18:29.222 }, 00:18:29.222 { 00:18:29.222 "method": "sock_impl_set_options", 00:18:29.222 "params": { 00:18:29.222 "impl_name": "ssl", 00:18:29.222 "recv_buf_size": 4096, 00:18:29.222 "send_buf_size": 4096, 00:18:29.222 "enable_recv_pipe": true, 00:18:29.222 "enable_quickack": false, 00:18:29.222 "enable_placement_id": 0, 00:18:29.222 "enable_zerocopy_send_server": true, 00:18:29.222 "enable_zerocopy_send_client": false, 00:18:29.222 "zerocopy_threshold": 0, 00:18:29.222 "tls_version": 0, 00:18:29.222 "enable_ktls": false 00:18:29.222 } 00:18:29.222 } 00:18:29.222 ] 00:18:29.222 }, 00:18:29.222 { 00:18:29.222 "subsystem": "vmd", 00:18:29.222 "config": [] 00:18:29.222 }, 00:18:29.222 { 00:18:29.222 "subsystem": "accel", 00:18:29.222 "config": [ 00:18:29.222 { 00:18:29.222 "method": "accel_set_options", 00:18:29.222 "params": { 00:18:29.222 "small_cache_size": 128, 00:18:29.222 "large_cache_size": 16, 00:18:29.222 "task_count": 2048, 00:18:29.222 "sequence_count": 2048, 00:18:29.222 "buf_count": 2048 00:18:29.222 } 00:18:29.222 } 00:18:29.222 ] 00:18:29.222 }, 00:18:29.223 { 00:18:29.223 "subsystem": "bdev", 00:18:29.223 "config": [ 00:18:29.223 { 00:18:29.223 "method": "bdev_set_options", 00:18:29.223 "params": { 00:18:29.223 "bdev_io_pool_size": 65535, 00:18:29.223 "bdev_io_cache_size": 256, 00:18:29.223 "bdev_auto_examine": true, 
00:18:29.223 "iobuf_small_cache_size": 128, 00:18:29.223 "iobuf_large_cache_size": 16 00:18:29.223 } 00:18:29.223 }, 00:18:29.223 { 00:18:29.223 "method": "bdev_raid_set_options", 00:18:29.223 "params": { 00:18:29.223 "process_window_size_kb": 1024 00:18:29.223 } 00:18:29.223 }, 00:18:29.223 { 00:18:29.223 "method": "bdev_iscsi_set_options", 00:18:29.223 "params": { 00:18:29.223 "timeout_sec": 30 00:18:29.223 } 00:18:29.223 }, 00:18:29.223 { 00:18:29.223 "method": "bdev_nvme_set_options", 00:18:29.223 "params": { 00:18:29.223 "action_on_timeout": "none", 00:18:29.223 "timeout_us": 0, 00:18:29.223 "timeout_admin_us": 0, 00:18:29.223 "keep_alive_timeout_ms": 10000, 00:18:29.223 "arbitration_burst": 0, 00:18:29.223 "low_priority_weight": 0, 00:18:29.223 "medium_priority_weight": 0, 00:18:29.223 "high_priority_weight": 0, 00:18:29.223 "nvme_adminq_poll_period_us": 10000, 00:18:29.223 "nvme_ioq_poll_period_us": 0, 00:18:29.223 "io_queue_requests": 0, 00:18:29.223 "delay_cmd_submit": true, 00:18:29.223 "transport_retry_count": 4, 00:18:29.223 "bdev_retry_count": 3, 00:18:29.223 "transport_ack_timeout": 0, 00:18:29.223 "ctrlr_loss_timeout_sec": 0, 00:18:29.223 "reconnect_delay_sec": 0, 00:18:29.223 "fast_io_fail_timeout_sec": 0, 00:18:29.223 "disable_auto_failback": false, 00:18:29.223 "generate_uuids": false, 00:18:29.223 "transport_tos": 0, 00:18:29.223 "nvme_error_stat": false, 00:18:29.223 "rdma_srq_size": 0, 00:18:29.223 "io_path_stat": false, 00:18:29.223 "allow_accel_sequence": false, 00:18:29.223 "rdma_max_cq_size": 0, 00:18:29.223 "rdma_cm_event_timeout_ms": 0, 00:18:29.223 "dhchap_digests": [ 00:18:29.223 "sha256", 00:18:29.223 "sha384", 00:18:29.223 "sha512" 00:18:29.223 ], 00:18:29.223 "dhchap_dhgroups": [ 00:18:29.223 "null", 00:18:29.223 "ffdhe2048", 00:18:29.223 "ffdhe3072", 00:18:29.223 "ffdhe4096", 00:18:29.223 "ffdhe6144", 00:18:29.223 "ffdhe8192" 00:18:29.223 ] 00:18:29.223 } 00:18:29.223 }, 00:18:29.223 { 00:18:29.223 "method": "bdev_nvme_set_hotplug", 
00:18:29.223 "params": { 00:18:29.223 "period_us": 100000, 00:18:29.223 "enable": false 00:18:29.223 } 00:18:29.223 }, 00:18:29.223 { 00:18:29.223 "method": "bdev_malloc_create", 00:18:29.223 "params": { 00:18:29.223 "name": "malloc0", 00:18:29.223 "num_blocks": 8192, 00:18:29.223 "block_size": 4096, 00:18:29.223 "physical_block_size": 4096, 00:18:29.223 "uuid": "3a01f101-f8cb-45c4-894a-34ed8a7e69d3", 00:18:29.223 "optimal_io_boundary": 0 00:18:29.223 } 00:18:29.223 }, 00:18:29.223 { 00:18:29.223 "method": "bdev_wait_for_examine" 00:18:29.223 } 00:18:29.223 ] 00:18:29.223 }, 00:18:29.223 { 00:18:29.223 "subsystem": "nbd", 00:18:29.223 "config": [] 00:18:29.223 }, 00:18:29.223 { 00:18:29.223 "subsystem": "scheduler", 00:18:29.223 "config": [ 00:18:29.223 { 00:18:29.223 "method": "framework_set_scheduler", 00:18:29.223 "params": { 00:18:29.223 "name": "static" 00:18:29.223 } 00:18:29.223 } 00:18:29.223 ] 00:18:29.223 }, 00:18:29.223 { 00:18:29.223 "subsystem": "nvmf", 00:18:29.223 "config": [ 00:18:29.223 { 00:18:29.223 "method": "nvmf_set_config", 00:18:29.223 "params": { 00:18:29.223 "discovery_filter": "match_any", 00:18:29.223 "admin_cmd_passthru": { 00:18:29.223 "identify_ctrlr": false 00:18:29.223 } 00:18:29.223 } 00:18:29.223 }, 00:18:29.223 { 00:18:29.223 "method": "nvmf_set_max_subsystems", 00:18:29.223 "params": { 00:18:29.223 "max_subsystems": 1024 00:18:29.223 } 00:18:29.223 }, 00:18:29.223 { 00:18:29.223 "method": "nvmf_set_crdt", 00:18:29.223 "params": { 00:18:29.223 "crdt1": 0, 00:18:29.223 "crdt2": 0, 00:18:29.223 "crdt3": 0 00:18:29.223 } 00:18:29.223 }, 00:18:29.223 { 00:18:29.223 "method": "nvmf_create_transport", 00:18:29.223 "params": { 00:18:29.223 "trtype": "TCP", 00:18:29.223 "max_queue_depth": 128, 00:18:29.223 "max_io_qpairs_per_ctrlr": 127, 00:18:29.223 "in_capsule_data_size": 4096, 00:18:29.223 "max_io_size": 131072, 00:18:29.223 "io_unit_size": 131072, 00:18:29.223 "max_aq_depth": 128, 00:18:29.223 "num_shared_buffers": 511, 00:18:29.223 
"buf_cache_size": 4294967295, 00:18:29.223 "dif_insert_or_strip": false, 00:18:29.223 "zcopy": false, 00:18:29.223 "c2h_success": false, 00:18:29.223 "sock_priority": 0, 00:18:29.223 "abort_timeout_sec": 1, 00:18:29.223 "ack_timeout": 0, 00:18:29.223 "data_wr_pool_size": 0 00:18:29.223 } 00:18:29.223 }, 00:18:29.223 { 00:18:29.223 "method": "nvmf_create_subsystem", 00:18:29.223 "params": { 00:18:29.223 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.223 "allow_any_host": false, 00:18:29.223 "serial_number": "SPDK00000000000001", 00:18:29.223 "model_number": "SPDK bdev Controller", 00:18:29.223 "max_namespaces": 10, 00:18:29.223 "min_cntlid": 1, 00:18:29.223 "max_cntlid": 65519, 00:18:29.223 "ana_reporting": false 00:18:29.223 } 00:18:29.223 }, 00:18:29.223 { 00:18:29.223 "method": "nvmf_subsystem_add_host", 00:18:29.223 "params": { 00:18:29.223 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.223 "host": "nqn.2016-06.io.spdk:host1", 00:18:29.223 "psk": "/tmp/tmp.H90Cb48IfE" 00:18:29.223 } 00:18:29.223 }, 00:18:29.223 { 00:18:29.223 "method": "nvmf_subsystem_add_ns", 00:18:29.223 "params": { 00:18:29.223 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.223 "namespace": { 00:18:29.223 "nsid": 1, 00:18:29.223 "bdev_name": "malloc0", 00:18:29.223 "nguid": "3A01F101F8CB45C4894A34ED8A7E69D3", 00:18:29.223 "uuid": "3a01f101-f8cb-45c4-894a-34ed8a7e69d3", 00:18:29.223 "no_auto_visible": false 00:18:29.223 } 00:18:29.223 } 00:18:29.223 }, 00:18:29.223 { 00:18:29.223 "method": "nvmf_subsystem_add_listener", 00:18:29.223 "params": { 00:18:29.223 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.223 "listen_address": { 00:18:29.223 "trtype": "TCP", 00:18:29.223 "adrfam": "IPv4", 00:18:29.223 "traddr": "10.0.0.2", 00:18:29.223 "trsvcid": "4420" 00:18:29.223 }, 00:18:29.223 "secure_channel": true 00:18:29.223 } 00:18:29.223 } 00:18:29.223 ] 00:18:29.223 } 00:18:29.223 ] 00:18:29.223 }' 00:18:29.223 15:28:46 -- target/tls.sh@197 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:29.484 15:28:46 -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:29.484 "subsystems": [ 00:18:29.484 { 00:18:29.484 "subsystem": "keyring", 00:18:29.484 "config": [] 00:18:29.484 }, 00:18:29.484 { 00:18:29.484 "subsystem": "iobuf", 00:18:29.484 "config": [ 00:18:29.484 { 00:18:29.484 "method": "iobuf_set_options", 00:18:29.484 "params": { 00:18:29.484 "small_pool_count": 8192, 00:18:29.484 "large_pool_count": 1024, 00:18:29.484 "small_bufsize": 8192, 00:18:29.484 "large_bufsize": 135168 00:18:29.484 } 00:18:29.484 } 00:18:29.484 ] 00:18:29.484 }, 00:18:29.484 { 00:18:29.484 "subsystem": "sock", 00:18:29.484 "config": [ 00:18:29.484 { 00:18:29.484 "method": "sock_impl_set_options", 00:18:29.484 "params": { 00:18:29.484 "impl_name": "posix", 00:18:29.484 "recv_buf_size": 2097152, 00:18:29.484 "send_buf_size": 2097152, 00:18:29.484 "enable_recv_pipe": true, 00:18:29.484 "enable_quickack": false, 00:18:29.484 "enable_placement_id": 0, 00:18:29.484 "enable_zerocopy_send_server": true, 00:18:29.484 "enable_zerocopy_send_client": false, 00:18:29.484 "zerocopy_threshold": 0, 00:18:29.484 "tls_version": 0, 00:18:29.484 "enable_ktls": false 00:18:29.484 } 00:18:29.484 }, 00:18:29.484 { 00:18:29.484 "method": "sock_impl_set_options", 00:18:29.484 "params": { 00:18:29.484 "impl_name": "ssl", 00:18:29.484 "recv_buf_size": 4096, 00:18:29.484 "send_buf_size": 4096, 00:18:29.484 "enable_recv_pipe": true, 00:18:29.484 "enable_quickack": false, 00:18:29.484 "enable_placement_id": 0, 00:18:29.484 "enable_zerocopy_send_server": true, 00:18:29.484 "enable_zerocopy_send_client": false, 00:18:29.484 "zerocopy_threshold": 0, 00:18:29.484 "tls_version": 0, 00:18:29.484 "enable_ktls": false 00:18:29.484 } 00:18:29.484 } 00:18:29.484 ] 00:18:29.484 }, 00:18:29.484 { 00:18:29.484 "subsystem": "vmd", 00:18:29.484 "config": [] 00:18:29.484 }, 00:18:29.484 { 00:18:29.484 "subsystem": 
"accel", 00:18:29.484 "config": [ 00:18:29.484 { 00:18:29.484 "method": "accel_set_options", 00:18:29.484 "params": { 00:18:29.484 "small_cache_size": 128, 00:18:29.484 "large_cache_size": 16, 00:18:29.484 "task_count": 2048, 00:18:29.484 "sequence_count": 2048, 00:18:29.484 "buf_count": 2048 00:18:29.484 } 00:18:29.484 } 00:18:29.484 ] 00:18:29.484 }, 00:18:29.484 { 00:18:29.484 "subsystem": "bdev", 00:18:29.484 "config": [ 00:18:29.484 { 00:18:29.484 "method": "bdev_set_options", 00:18:29.484 "params": { 00:18:29.484 "bdev_io_pool_size": 65535, 00:18:29.484 "bdev_io_cache_size": 256, 00:18:29.484 "bdev_auto_examine": true, 00:18:29.484 "iobuf_small_cache_size": 128, 00:18:29.484 "iobuf_large_cache_size": 16 00:18:29.484 } 00:18:29.484 }, 00:18:29.484 { 00:18:29.484 "method": "bdev_raid_set_options", 00:18:29.484 "params": { 00:18:29.484 "process_window_size_kb": 1024 00:18:29.484 } 00:18:29.484 }, 00:18:29.484 { 00:18:29.484 "method": "bdev_iscsi_set_options", 00:18:29.484 "params": { 00:18:29.484 "timeout_sec": 30 00:18:29.484 } 00:18:29.484 }, 00:18:29.484 { 00:18:29.484 "method": "bdev_nvme_set_options", 00:18:29.484 "params": { 00:18:29.484 "action_on_timeout": "none", 00:18:29.484 "timeout_us": 0, 00:18:29.484 "timeout_admin_us": 0, 00:18:29.484 "keep_alive_timeout_ms": 10000, 00:18:29.484 "arbitration_burst": 0, 00:18:29.484 "low_priority_weight": 0, 00:18:29.484 "medium_priority_weight": 0, 00:18:29.484 "high_priority_weight": 0, 00:18:29.484 "nvme_adminq_poll_period_us": 10000, 00:18:29.484 "nvme_ioq_poll_period_us": 0, 00:18:29.484 "io_queue_requests": 512, 00:18:29.484 "delay_cmd_submit": true, 00:18:29.484 "transport_retry_count": 4, 00:18:29.484 "bdev_retry_count": 3, 00:18:29.484 "transport_ack_timeout": 0, 00:18:29.484 "ctrlr_loss_timeout_sec": 0, 00:18:29.484 "reconnect_delay_sec": 0, 00:18:29.484 "fast_io_fail_timeout_sec": 0, 00:18:29.484 "disable_auto_failback": false, 00:18:29.484 "generate_uuids": false, 00:18:29.484 "transport_tos": 0, 
00:18:29.484 "nvme_error_stat": false, 00:18:29.484 "rdma_srq_size": 0, 00:18:29.484 "io_path_stat": false, 00:18:29.484 "allow_accel_sequence": false, 00:18:29.484 "rdma_max_cq_size": 0, 00:18:29.484 "rdma_cm_event_timeout_ms": 0, 00:18:29.484 "dhchap_digests": [ 00:18:29.484 "sha256", 00:18:29.484 "sha384", 00:18:29.484 "sha512" 00:18:29.484 ], 00:18:29.484 "dhchap_dhgroups": [ 00:18:29.484 "null", 00:18:29.484 "ffdhe2048", 00:18:29.484 "ffdhe3072", 00:18:29.484 "ffdhe4096", 00:18:29.484 "ffdhe6144", 00:18:29.484 "ffdhe8192" 00:18:29.484 ] 00:18:29.484 } 00:18:29.484 }, 00:18:29.484 { 00:18:29.484 "method": "bdev_nvme_attach_controller", 00:18:29.484 "params": { 00:18:29.484 "name": "TLSTEST", 00:18:29.484 "trtype": "TCP", 00:18:29.484 "adrfam": "IPv4", 00:18:29.484 "traddr": "10.0.0.2", 00:18:29.484 "trsvcid": "4420", 00:18:29.484 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.484 "prchk_reftag": false, 00:18:29.484 "prchk_guard": false, 00:18:29.484 "ctrlr_loss_timeout_sec": 0, 00:18:29.484 "reconnect_delay_sec": 0, 00:18:29.484 "fast_io_fail_timeout_sec": 0, 00:18:29.484 "psk": "/tmp/tmp.H90Cb48IfE", 00:18:29.484 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:29.484 "hdgst": false, 00:18:29.484 "ddgst": false 00:18:29.484 } 00:18:29.484 }, 00:18:29.484 { 00:18:29.484 "method": "bdev_nvme_set_hotplug", 00:18:29.484 "params": { 00:18:29.484 "period_us": 100000, 00:18:29.484 "enable": false 00:18:29.484 } 00:18:29.484 }, 00:18:29.484 { 00:18:29.484 "method": "bdev_wait_for_examine" 00:18:29.484 } 00:18:29.484 ] 00:18:29.484 }, 00:18:29.484 { 00:18:29.484 "subsystem": "nbd", 00:18:29.484 "config": [] 00:18:29.484 } 00:18:29.484 ] 00:18:29.484 }' 00:18:29.484 15:28:46 -- target/tls.sh@199 -- # killprocess 1648403 00:18:29.484 15:28:46 -- common/autotest_common.sh@936 -- # '[' -z 1648403 ']' 00:18:29.484 15:28:46 -- common/autotest_common.sh@940 -- # kill -0 1648403 00:18:29.484 15:28:46 -- common/autotest_common.sh@941 -- # uname 00:18:29.484 15:28:46 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:29.484 15:28:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1648403 00:18:29.745 15:28:46 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:29.745 15:28:46 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:29.745 15:28:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1648403' 00:18:29.745 killing process with pid 1648403 00:18:29.745 15:28:46 -- common/autotest_common.sh@955 -- # kill 1648403 00:18:29.745 Received shutdown signal, test time was about 10.000000 seconds 00:18:29.745 00:18:29.745 Latency(us) 00:18:29.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.745 =================================================================================================================== 00:18:29.745 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:29.745 [2024-04-26 15:28:46.941245] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:29.745 15:28:46 -- common/autotest_common.sh@960 -- # wait 1648403 00:18:29.745 15:28:47 -- target/tls.sh@200 -- # killprocess 1648045 00:18:29.745 15:28:47 -- common/autotest_common.sh@936 -- # '[' -z 1648045 ']' 00:18:29.745 15:28:47 -- common/autotest_common.sh@940 -- # kill -0 1648045 00:18:29.745 15:28:47 -- common/autotest_common.sh@941 -- # uname 00:18:29.745 15:28:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:29.745 15:28:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1648045 00:18:29.745 15:28:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:29.745 15:28:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:29.745 15:28:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1648045' 00:18:29.745 killing process with pid 1648045 00:18:29.745 15:28:47 
-- common/autotest_common.sh@955 -- # kill 1648045 00:18:29.745 [2024-04-26 15:28:47.109189] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:29.745 15:28:47 -- common/autotest_common.sh@960 -- # wait 1648045 00:18:30.007 15:28:47 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:30.007 15:28:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:30.007 15:28:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:30.007 15:28:47 -- common/autotest_common.sh@10 -- # set +x 00:18:30.007 15:28:47 -- target/tls.sh@203 -- # echo '{ 00:18:30.007 "subsystems": [ 00:18:30.007 { 00:18:30.007 "subsystem": "keyring", 00:18:30.007 "config": [] 00:18:30.007 }, 00:18:30.007 { 00:18:30.007 "subsystem": "iobuf", 00:18:30.007 "config": [ 00:18:30.007 { 00:18:30.007 "method": "iobuf_set_options", 00:18:30.007 "params": { 00:18:30.007 "small_pool_count": 8192, 00:18:30.007 "large_pool_count": 1024, 00:18:30.007 "small_bufsize": 8192, 00:18:30.007 "large_bufsize": 135168 00:18:30.007 } 00:18:30.007 } 00:18:30.007 ] 00:18:30.008 }, 00:18:30.008 { 00:18:30.008 "subsystem": "sock", 00:18:30.008 "config": [ 00:18:30.008 { 00:18:30.008 "method": "sock_impl_set_options", 00:18:30.008 "params": { 00:18:30.008 "impl_name": "posix", 00:18:30.008 "recv_buf_size": 2097152, 00:18:30.008 "send_buf_size": 2097152, 00:18:30.008 "enable_recv_pipe": true, 00:18:30.008 "enable_quickack": false, 00:18:30.008 "enable_placement_id": 0, 00:18:30.008 "enable_zerocopy_send_server": true, 00:18:30.008 "enable_zerocopy_send_client": false, 00:18:30.008 "zerocopy_threshold": 0, 00:18:30.008 "tls_version": 0, 00:18:30.008 "enable_ktls": false 00:18:30.008 } 00:18:30.008 }, 00:18:30.008 { 00:18:30.008 "method": "sock_impl_set_options", 00:18:30.008 "params": { 00:18:30.008 "impl_name": "ssl", 00:18:30.008 "recv_buf_size": 4096, 00:18:30.008 "send_buf_size": 4096, 00:18:30.008 "enable_recv_pipe": 
true, 00:18:30.008 "enable_quickack": false, 00:18:30.008 "enable_placement_id": 0, 00:18:30.008 "enable_zerocopy_send_server": true, 00:18:30.008 "enable_zerocopy_send_client": false, 00:18:30.008 "zerocopy_threshold": 0, 00:18:30.008 "tls_version": 0, 00:18:30.008 "enable_ktls": false 00:18:30.008 } 00:18:30.008 } 00:18:30.008 ] 00:18:30.008 }, 00:18:30.008 { 00:18:30.008 "subsystem": "vmd", 00:18:30.008 "config": [] 00:18:30.008 }, 00:18:30.008 { 00:18:30.008 "subsystem": "accel", 00:18:30.008 "config": [ 00:18:30.008 { 00:18:30.008 "method": "accel_set_options", 00:18:30.008 "params": { 00:18:30.008 "small_cache_size": 128, 00:18:30.008 "large_cache_size": 16, 00:18:30.008 "task_count": 2048, 00:18:30.008 "sequence_count": 2048, 00:18:30.008 "buf_count": 2048 00:18:30.008 } 00:18:30.008 } 00:18:30.008 ] 00:18:30.008 }, 00:18:30.008 { 00:18:30.008 "subsystem": "bdev", 00:18:30.008 "config": [ 00:18:30.008 { 00:18:30.008 "method": "bdev_set_options", 00:18:30.008 "params": { 00:18:30.008 "bdev_io_pool_size": 65535, 00:18:30.008 "bdev_io_cache_size": 256, 00:18:30.008 "bdev_auto_examine": true, 00:18:30.008 "iobuf_small_cache_size": 128, 00:18:30.008 "iobuf_large_cache_size": 16 00:18:30.008 } 00:18:30.008 }, 00:18:30.008 { 00:18:30.008 "method": "bdev_raid_set_options", 00:18:30.008 "params": { 00:18:30.008 "process_window_size_kb": 1024 00:18:30.008 } 00:18:30.008 }, 00:18:30.008 { 00:18:30.008 "method": "bdev_iscsi_set_options", 00:18:30.008 "params": { 00:18:30.008 "timeout_sec": 30 00:18:30.008 } 00:18:30.008 }, 00:18:30.008 { 00:18:30.008 "method": "bdev_nvme_set_options", 00:18:30.008 "params": { 00:18:30.008 "action_on_timeout": "none", 00:18:30.008 "timeout_us": 0, 00:18:30.008 "timeout_admin_us": 0, 00:18:30.008 "keep_alive_timeout_ms": 10000, 00:18:30.008 "arbitration_burst": 0, 00:18:30.008 "low_priority_weight": 0, 00:18:30.008 "medium_priority_weight": 0, 00:18:30.008 "high_priority_weight": 0, 00:18:30.008 "nvme_adminq_poll_period_us": 10000, 
00:18:30.008 "nvme_ioq_poll_period_us": 0, 00:18:30.008 "io_queue_requests": 0, 00:18:30.008 "delay_cmd_submit": true, 00:18:30.008 "transport_retry_count": 4, 00:18:30.008 "bdev_retry_count": 3, 00:18:30.008 "transport_ack_timeout": 0, 00:18:30.008 "ctrlr_loss_timeout_sec": 0, 00:18:30.008 "reconnect_delay_sec": 0, 00:18:30.008 "fast_io_fail_timeout_sec": 0, 00:18:30.008 "disable_auto_failback": false, 00:18:30.008 "generate_uuids": false, 00:18:30.008 "transport_tos": 0, 00:18:30.008 "nvme_error_stat": false, 00:18:30.008 "rdma_srq_size": 0, 00:18:30.008 "io_path_stat": false, 00:18:30.008 "allow_accel_sequence": false, 00:18:30.008 "rdma_max_cq_size": 0, 00:18:30.008 "rdma_cm_event_timeout_ms": 0, 00:18:30.008 "dhchap_digests": [ 00:18:30.008 "sha256", 00:18:30.008 "sha384", 00:18:30.008 "sha512" 00:18:30.008 ], 00:18:30.008 "dhchap_dhgroups": [ 00:18:30.008 "null", 00:18:30.008 "ffdhe2048", 00:18:30.008 "ffdhe3072", 00:18:30.008 "ffdhe4096", 00:18:30.008 "ffdhe6144", 00:18:30.008 "ffdhe8192" 00:18:30.008 ] 00:18:30.008 } 00:18:30.008 }, 00:18:30.008 { 00:18:30.008 "method": "bdev_nvme_set_hotplug", 00:18:30.008 "params": { 00:18:30.008 "period_us": 100000, 00:18:30.008 "enable": false 00:18:30.008 } 00:18:30.008 }, 00:18:30.008 { 00:18:30.008 "method": "bdev_malloc_create", 00:18:30.008 "params": { 00:18:30.008 "name": "malloc0", 00:18:30.008 "num_blocks": 8192, 00:18:30.008 "block_size": 4096, 00:18:30.008 "physical_block_size": 4096, 00:18:30.008 "uuid": "3a01f101-f8cb-45c4-894a-34ed8a7e69d3", 00:18:30.008 "optimal_io_boundary": 0 00:18:30.008 } 00:18:30.008 }, 00:18:30.008 { 00:18:30.008 "method": "bdev_wait_for_examine" 00:18:30.008 } 00:18:30.008 ] 00:18:30.008 }, 00:18:30.008 { 00:18:30.008 "subsystem": "nbd", 00:18:30.008 "config": [] 00:18:30.008 }, 00:18:30.008 { 00:18:30.008 "subsystem": "scheduler", 00:18:30.008 "config": [ 00:18:30.008 { 00:18:30.008 "method": "framework_set_scheduler", 00:18:30.008 "params": { 00:18:30.008 "name": "static" 
00:18:30.008 } 00:18:30.008 } 00:18:30.008 ] 00:18:30.008 }, 00:18:30.008 { 00:18:30.008 "subsystem": "nvmf", 00:18:30.008 "config": [ 00:18:30.008 { 00:18:30.008 "method": "nvmf_set_config", 00:18:30.008 "params": { 00:18:30.008 "discovery_filter": "match_any", 00:18:30.008 "admin_cmd_passthru": { 00:18:30.008 "identify_ctrlr": false 00:18:30.008 } 00:18:30.008 } 00:18:30.008 }, 00:18:30.008 { 00:18:30.008 "method": "nvmf_set_max_subsystems", 00:18:30.008 "params": { 00:18:30.008 "max_subsystems": 1024 00:18:30.008 } 00:18:30.008 }, 00:18:30.008 { 00:18:30.008 "method": "nvmf_set_crdt", 00:18:30.008 "params": { 00:18:30.008 "crdt1": 0, 00:18:30.008 "crdt2": 0, 00:18:30.008 "crdt3": 0 00:18:30.008 } 00:18:30.008 }, 00:18:30.008 { 00:18:30.008 "method": "nvmf_create_transport", 00:18:30.008 "params": { 00:18:30.008 "trtype": "TCP", 00:18:30.008 "max_queue_depth": 128, 00:18:30.008 "max_io_qpairs_per_ctrlr": 127, 00:18:30.008 "in_capsule_data_size": 4096, 00:18:30.008 "max_io_size": 131072, 00:18:30.008 "io_unit_size": 131072, 00:18:30.008 "max_aq_depth": 128, 00:18:30.008 "num_shared_buffers": 511, 00:18:30.008 "buf_cache_size": 4294967295, 00:18:30.008 "dif_insert_or_strip": false, 00:18:30.008 "zcopy": false, 00:18:30.008 "c2h_success": false, 00:18:30.008 "sock_priority": 0, 00:18:30.008 "abort_timeout_sec": 1, 00:18:30.008 "ack_timeout": 0, 00:18:30.008 "data_wr_pool_size": 0 00:18:30.008 } 00:18:30.008 }, 00:18:30.008 { 00:18:30.008 "method": "nvmf_create_subsystem", 00:18:30.008 "params": { 00:18:30.008 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.008 "allow_any_host": false, 00:18:30.008 "serial_number": "SPDK00000000000001", 00:18:30.008 "model_number": "SPDK bdev Controller", 00:18:30.008 "max_namespaces": 10, 00:18:30.008 "min_cntlid": 1, 00:18:30.008 "max_cntlid": 65519, 00:18:30.008 "ana_reporting": false 00:18:30.008 } 00:18:30.008 }, 00:18:30.008 { 00:18:30.008 "method": "nvmf_subsystem_add_host", 00:18:30.008 "params": { 00:18:30.008 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:18:30.008 "host": "nqn.2016-06.io.spdk:host1", 00:18:30.008 "psk": "/tmp/tmp.H90Cb48IfE" 00:18:30.008 } 00:18:30.008 }, 00:18:30.008 { 00:18:30.008 "method": "nvmf_subsystem_add_ns", 00:18:30.008 "params": { 00:18:30.009 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.009 "namespace": { 00:18:30.009 "nsid": 1, 00:18:30.009 "bdev_name": "malloc0", 00:18:30.009 "nguid": "3A01F101F8CB45C4894A34ED8A7E69D3", 00:18:30.009 "uuid": "3a01f101-f8cb-45c4-894a-34ed8a7e69d3", 00:18:30.009 "no_auto_visible": false 00:18:30.009 } 00:18:30.009 } 00:18:30.009 }, 00:18:30.009 { 00:18:30.009 "method": "nvmf_subsystem_add_listener", 00:18:30.009 "params": { 00:18:30.009 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.009 "listen_address": { 00:18:30.009 "trtype": "TCP", 00:18:30.009 "adrfam": "IPv4", 00:18:30.009 "traddr": "10.0.0.2", 00:18:30.009 "trsvcid": "4420" 00:18:30.009 }, 00:18:30.009 "secure_channel": true 00:18:30.009 } 00:18:30.009 } 00:18:30.009 ] 00:18:30.009 } 00:18:30.009 ] 00:18:30.009 }' 00:18:30.009 15:28:47 -- nvmf/common.sh@470 -- # nvmfpid=1648578 00:18:30.009 15:28:47 -- nvmf/common.sh@471 -- # waitforlisten 1648578 00:18:30.009 15:28:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:30.009 15:28:47 -- common/autotest_common.sh@817 -- # '[' -z 1648578 ']' 00:18:30.009 15:28:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.009 15:28:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:30.009 15:28:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:30.009 15:28:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:30.009 15:28:47 -- common/autotest_common.sh@10 -- # set +x 00:18:30.009 [2024-04-26 15:28:47.282093] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:18:30.009 [2024-04-26 15:28:47.282148] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.009 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.009 [2024-04-26 15:28:47.365695] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.009 [2024-04-26 15:28:47.417674] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.009 [2024-04-26 15:28:47.417707] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.009 [2024-04-26 15:28:47.417712] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.009 [2024-04-26 15:28:47.417717] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.009 [2024-04-26 15:28:47.417721] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:30.009 [2024-04-26 15:28:47.417766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.269 [2024-04-26 15:28:47.593125] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:30.269 [2024-04-26 15:28:47.609094] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:30.269 [2024-04-26 15:28:47.625144] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:30.269 [2024-04-26 15:28:47.634163] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.843 15:28:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:30.843 15:28:48 -- common/autotest_common.sh@850 -- # return 0 00:18:30.843 15:28:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:30.843 15:28:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:30.843 15:28:48 -- common/autotest_common.sh@10 -- # set +x 00:18:30.843 15:28:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.843 15:28:48 -- target/tls.sh@207 -- # bdevperf_pid=1648780 00:18:30.843 15:28:48 -- target/tls.sh@208 -- # waitforlisten 1648780 /var/tmp/bdevperf.sock 00:18:30.843 15:28:48 -- common/autotest_common.sh@817 -- # '[' -z 1648780 ']' 00:18:30.843 15:28:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.843 15:28:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:30.843 15:28:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:30.843 15:28:48 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:30.843 15:28:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:30.843 15:28:48 -- common/autotest_common.sh@10 -- # set +x 00:18:30.843 15:28:48 -- target/tls.sh@204 -- # echo '{ 00:18:30.843 "subsystems": [ 00:18:30.843 { 00:18:30.843 "subsystem": "keyring", 00:18:30.843 "config": [] 00:18:30.843 }, 00:18:30.843 { 00:18:30.843 "subsystem": "iobuf", 00:18:30.843 "config": [ 00:18:30.843 { 00:18:30.843 "method": "iobuf_set_options", 00:18:30.843 "params": { 00:18:30.843 "small_pool_count": 8192, 00:18:30.843 "large_pool_count": 1024, 00:18:30.843 "small_bufsize": 8192, 00:18:30.843 "large_bufsize": 135168 00:18:30.843 } 00:18:30.843 } 00:18:30.843 ] 00:18:30.843 }, 00:18:30.843 { 00:18:30.843 "subsystem": "sock", 00:18:30.843 "config": [ 00:18:30.843 { 00:18:30.843 "method": "sock_impl_set_options", 00:18:30.843 "params": { 00:18:30.843 "impl_name": "posix", 00:18:30.843 "recv_buf_size": 2097152, 00:18:30.843 "send_buf_size": 2097152, 00:18:30.843 "enable_recv_pipe": true, 00:18:30.843 "enable_quickack": false, 00:18:30.843 "enable_placement_id": 0, 00:18:30.843 "enable_zerocopy_send_server": true, 00:18:30.843 "enable_zerocopy_send_client": false, 00:18:30.843 "zerocopy_threshold": 0, 00:18:30.843 "tls_version": 0, 00:18:30.843 "enable_ktls": false 00:18:30.843 } 00:18:30.843 }, 00:18:30.843 { 00:18:30.843 "method": "sock_impl_set_options", 00:18:30.843 "params": { 00:18:30.843 "impl_name": "ssl", 00:18:30.843 "recv_buf_size": 4096, 00:18:30.843 "send_buf_size": 4096, 00:18:30.843 "enable_recv_pipe": true, 00:18:30.843 "enable_quickack": false, 00:18:30.843 "enable_placement_id": 0, 00:18:30.843 "enable_zerocopy_send_server": true, 00:18:30.843 "enable_zerocopy_send_client": false, 00:18:30.843 "zerocopy_threshold": 0, 00:18:30.843 "tls_version": 
0, 00:18:30.843 "enable_ktls": false 00:18:30.843 } 00:18:30.843 } 00:18:30.843 ] 00:18:30.843 }, 00:18:30.843 { 00:18:30.843 "subsystem": "vmd", 00:18:30.843 "config": [] 00:18:30.843 }, 00:18:30.843 { 00:18:30.843 "subsystem": "accel", 00:18:30.843 "config": [ 00:18:30.843 { 00:18:30.843 "method": "accel_set_options", 00:18:30.843 "params": { 00:18:30.843 "small_cache_size": 128, 00:18:30.843 "large_cache_size": 16, 00:18:30.843 "task_count": 2048, 00:18:30.843 "sequence_count": 2048, 00:18:30.843 "buf_count": 2048 00:18:30.843 } 00:18:30.843 } 00:18:30.843 ] 00:18:30.843 }, 00:18:30.843 { 00:18:30.843 "subsystem": "bdev", 00:18:30.843 "config": [ 00:18:30.843 { 00:18:30.843 "method": "bdev_set_options", 00:18:30.843 "params": { 00:18:30.843 "bdev_io_pool_size": 65535, 00:18:30.843 "bdev_io_cache_size": 256, 00:18:30.843 "bdev_auto_examine": true, 00:18:30.843 "iobuf_small_cache_size": 128, 00:18:30.843 "iobuf_large_cache_size": 16 00:18:30.843 } 00:18:30.843 }, 00:18:30.843 { 00:18:30.843 "method": "bdev_raid_set_options", 00:18:30.843 "params": { 00:18:30.843 "process_window_size_kb": 1024 00:18:30.843 } 00:18:30.843 }, 00:18:30.843 { 00:18:30.843 "method": "bdev_iscsi_set_options", 00:18:30.843 "params": { 00:18:30.843 "timeout_sec": 30 00:18:30.843 } 00:18:30.843 }, 00:18:30.843 { 00:18:30.843 "method": "bdev_nvme_set_options", 00:18:30.843 "params": { 00:18:30.843 "action_on_timeout": "none", 00:18:30.843 "timeout_us": 0, 00:18:30.843 "timeout_admin_us": 0, 00:18:30.843 "keep_alive_timeout_ms": 10000, 00:18:30.843 "arbitration_burst": 0, 00:18:30.843 "low_priority_weight": 0, 00:18:30.843 "medium_priority_weight": 0, 00:18:30.843 "high_priority_weight": 0, 00:18:30.843 "nvme_adminq_poll_period_us": 10000, 00:18:30.843 "nvme_ioq_poll_period_us": 0, 00:18:30.843 "io_queue_requests": 512, 00:18:30.843 "delay_cmd_submit": true, 00:18:30.843 "transport_retry_count": 4, 00:18:30.843 "bdev_retry_count": 3, 00:18:30.843 "transport_ack_timeout": 0, 00:18:30.843 
"ctrlr_loss_timeout_sec": 0, 00:18:30.843 "reconnect_delay_sec": 0, 00:18:30.843 "fast_io_fail_timeout_sec": 0, 00:18:30.843 "disable_auto_failback": false, 00:18:30.843 "generate_uuids": false, 00:18:30.843 "transport_tos": 0, 00:18:30.843 "nvme_error_stat": false, 00:18:30.843 "rdma_srq_size": 0, 00:18:30.843 "io_path_stat": false, 00:18:30.843 "allow_accel_sequence": false, 00:18:30.843 "rdma_max_cq_size": 0, 00:18:30.843 "rdma_cm_event_timeout_ms": 0, 00:18:30.843 "dhchap_digests": [ 00:18:30.843 "sha256", 00:18:30.843 "sha384", 00:18:30.843 "sha512" 00:18:30.843 ], 00:18:30.843 "dhchap_dhgroups": [ 00:18:30.843 "null", 00:18:30.843 "ffdhe2048", 00:18:30.843 "ffdhe3072", 00:18:30.843 "ffdhe4096", 00:18:30.843 "ffdhe6144", 00:18:30.843 "ffdhe8192" 00:18:30.843 ] 00:18:30.843 } 00:18:30.843 }, 00:18:30.843 { 00:18:30.843 "method": "bdev_nvme_attach_controller", 00:18:30.843 "params": { 00:18:30.843 "name": "TLSTEST", 00:18:30.843 "trtype": "TCP", 00:18:30.843 "adrfam": "IPv4", 00:18:30.843 "traddr": "10.0.0.2", 00:18:30.843 "trsvcid": "4420", 00:18:30.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.843 "prchk_reftag": false, 00:18:30.843 "prchk_guard": false, 00:18:30.843 "ctrlr_loss_timeout_sec": 0, 00:18:30.843 "reconnect_delay_sec": 0, 00:18:30.843 "fast_io_fail_timeout_sec": 0, 00:18:30.843 "psk": "/tmp/tmp.H90Cb48IfE", 00:18:30.843 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.843 "hdgst": false, 00:18:30.843 "ddgst": false 00:18:30.843 } 00:18:30.843 }, 00:18:30.843 { 00:18:30.843 "method": "bdev_nvme_set_hotplug", 00:18:30.843 "params": { 00:18:30.843 "period_us": 100000, 00:18:30.843 "enable": false 00:18:30.843 } 00:18:30.843 }, 00:18:30.843 { 00:18:30.843 "method": "bdev_wait_for_examine" 00:18:30.843 } 00:18:30.843 ] 00:18:30.843 }, 00:18:30.843 { 00:18:30.843 "subsystem": "nbd", 00:18:30.843 "config": [] 00:18:30.843 } 00:18:30.843 ] 00:18:30.843 }' 00:18:30.843 [2024-04-26 15:28:48.138318] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 
23.11.0 initialization... 00:18:30.843 [2024-04-26 15:28:48.138370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648780 ] 00:18:30.844 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.844 [2024-04-26 15:28:48.189234] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.844 [2024-04-26 15:28:48.240450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.104 [2024-04-26 15:28:48.356973] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:31.104 [2024-04-26 15:28:48.357037] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:31.675 15:28:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:31.675 15:28:48 -- common/autotest_common.sh@850 -- # return 0 00:18:31.675 15:28:48 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:31.675 Running I/O for 10 seconds... 
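The first test run above drives `nvmf_tgt` entirely from a JSON config streamed over `/dev/fd/62`, using the deprecated file-path PSK form (`"psk": "/tmp/tmp.H90Cb48IfE"`, flagged in the log as scheduled for removal in v24.09). As a reading aid, here is a minimal sketch that rebuilds just the nvmf portion of that config; the method names and parameter values are copied from the log, the PSK path is the throwaway temp file the test generated, and the shortened parameter lists are an assumption (the real config carries many more fields).

```python
import json

# Sketch only: reconstruct the nvmf subsystem fragment of the JSON config
# that target/tls.sh pipes to nvmf_tgt. Values are taken from the log above;
# parameter lists are abbreviated relative to the real config.
nvmf_config = {
    "subsystem": "nvmf",
    "config": [
        {"method": "nvmf_create_transport",
         "params": {"trtype": "TCP", "max_queue_depth": 128,
                    "c2h_success": False}},
        {"method": "nvmf_create_subsystem",
         "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                    "allow_any_host": False,
                    "serial_number": "SPDK00000000000001",
                    "max_namespaces": 10}},
        # Deprecated PSK-path form: the PSK is referenced by file path.
        {"method": "nvmf_subsystem_add_host",
         "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                    "host": "nqn.2016-06.io.spdk:host1",
                    "psk": "/tmp/tmp.H90Cb48IfE"}},
        # secure_channel=True is what makes the 4420 listener require TLS.
        {"method": "nvmf_subsystem_add_listener",
         "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                    "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                                       "traddr": "10.0.0.2",
                                       "trsvcid": "4420"},
                    "secure_channel": True}},
    ],
}

print(json.dumps({"subsystems": [nvmf_config]}, indent=2))
```

This matches the ordering visible in the log: transport, subsystem, host (with PSK), then the TLS listener last, so that the listener only comes up once a host/PSK pairing exists.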
00:18:41.669 00:18:41.669 Latency(us) 00:18:41.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.669 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:41.669 Verification LBA range: start 0x0 length 0x2000 00:18:41.669 TLSTESTn1 : 10.03 5654.79 22.09 0.00 0.00 22588.72 5133.65 77332.48 00:18:41.669 =================================================================================================================== 00:18:41.669 Total : 5654.79 22.09 0.00 0.00 22588.72 5133.65 77332.48 00:18:41.669 0 00:18:41.669 15:28:59 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:41.669 15:28:59 -- target/tls.sh@214 -- # killprocess 1648780 00:18:41.669 15:28:59 -- common/autotest_common.sh@936 -- # '[' -z 1648780 ']' 00:18:41.669 15:28:59 -- common/autotest_common.sh@940 -- # kill -0 1648780 00:18:41.669 15:28:59 -- common/autotest_common.sh@941 -- # uname 00:18:41.669 15:28:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:41.669 15:28:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1648780 00:18:41.669 15:28:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:41.669 15:28:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:41.669 15:28:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1648780' 00:18:41.669 killing process with pid 1648780 00:18:41.669 15:28:59 -- common/autotest_common.sh@955 -- # kill 1648780 00:18:41.669 Received shutdown signal, test time was about 10.000000 seconds 00:18:41.669 00:18:41.669 Latency(us) 00:18:41.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.669 =================================================================================================================== 00:18:41.669 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:41.669 [2024-04-26 15:28:59.112531] app.c: 937:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:41.669 15:28:59 -- common/autotest_common.sh@960 -- # wait 1648780 00:18:41.929 15:28:59 -- target/tls.sh@215 -- # killprocess 1648578 00:18:41.929 15:28:59 -- common/autotest_common.sh@936 -- # '[' -z 1648578 ']' 00:18:41.929 15:28:59 -- common/autotest_common.sh@940 -- # kill -0 1648578 00:18:41.929 15:28:59 -- common/autotest_common.sh@941 -- # uname 00:18:41.929 15:28:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:41.929 15:28:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1648578 00:18:41.929 15:28:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:41.929 15:28:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:41.929 15:28:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1648578' 00:18:41.929 killing process with pid 1648578 00:18:41.929 15:28:59 -- common/autotest_common.sh@955 -- # kill 1648578 00:18:41.929 [2024-04-26 15:28:59.280223] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:41.929 15:28:59 -- common/autotest_common.sh@960 -- # wait 1648578 00:18:42.189 15:28:59 -- target/tls.sh@218 -- # nvmfappstart 00:18:42.189 15:28:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:42.189 15:28:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:42.189 15:28:59 -- common/autotest_common.sh@10 -- # set +x 00:18:42.189 15:28:59 -- nvmf/common.sh@470 -- # nvmfpid=1650968 00:18:42.189 15:28:59 -- nvmf/common.sh@471 -- # waitforlisten 1650968 00:18:42.189 15:28:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:42.189 15:28:59 -- common/autotest_common.sh@817 -- # '[' -z 1650968 ']' 00:18:42.189 15:28:59 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:18:42.189 15:28:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:42.189 15:28:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.189 15:28:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:42.189 15:28:59 -- common/autotest_common.sh@10 -- # set +x 00:18:42.189 [2024-04-26 15:28:59.457125] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:18:42.189 [2024-04-26 15:28:59.457181] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.189 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.189 [2024-04-26 15:28:59.522055] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.189 [2024-04-26 15:28:59.585883] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.189 [2024-04-26 15:28:59.585920] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.189 [2024-04-26 15:28:59.585927] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.189 [2024-04-26 15:28:59.585933] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.189 [2024-04-26 15:28:59.585939] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:42.189 [2024-04-26 15:28:59.585956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.130 15:29:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:43.130 15:29:00 -- common/autotest_common.sh@850 -- # return 0 00:18:43.130 15:29:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:43.130 15:29:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:43.130 15:29:00 -- common/autotest_common.sh@10 -- # set +x 00:18:43.130 15:29:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.130 15:29:00 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.H90Cb48IfE 00:18:43.130 15:29:00 -- target/tls.sh@49 -- # local key=/tmp/tmp.H90Cb48IfE 00:18:43.130 15:29:00 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:43.130 [2024-04-26 15:29:00.404888] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.130 15:29:00 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:43.390 15:29:00 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:43.390 [2024-04-26 15:29:00.725696] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:43.390 [2024-04-26 15:29:00.725899] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.390 15:29:00 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:43.651 malloc0 00:18:43.651 15:29:00 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:18:43.651 15:29:01 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.H90Cb48IfE 00:18:43.910 [2024-04-26 15:29:01.241748] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:43.910 15:29:01 -- target/tls.sh@222 -- # bdevperf_pid=1651397 00:18:43.910 15:29:01 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:43.910 15:29:01 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:43.910 15:29:01 -- target/tls.sh@225 -- # waitforlisten 1651397 /var/tmp/bdevperf.sock 00:18:43.910 15:29:01 -- common/autotest_common.sh@817 -- # '[' -z 1651397 ']' 00:18:43.910 15:29:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:43.910 15:29:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:43.910 15:29:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:43.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:43.910 15:29:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:43.910 15:29:01 -- common/autotest_common.sh@10 -- # set +x 00:18:43.910 [2024-04-26 15:29:01.319591] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:18:43.910 [2024-04-26 15:29:01.319643] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1651397 ] 00:18:43.910 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.168 [2024-04-26 15:29:01.395262] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.168 [2024-04-26 15:29:01.447261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.739 15:29:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:44.739 15:29:02 -- common/autotest_common.sh@850 -- # return 0 00:18:44.739 15:29:02 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H90Cb48IfE 00:18:44.998 15:29:02 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:44.998 [2024-04-26 15:29:02.369376] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:44.998 nvme0n1 00:18:45.258 15:29:02 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:45.258 Running I/O for 1 seconds... 
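The second bdevperf run above switches to the keyring flow: the PSK file is first registered as a named key (`key0`) via `keyring_file_add_key`, and `bdev_nvme_attach_controller` then references it by name with `--psk key0`, rather than using the deprecated `spdk_nvme_ctrlr_opts.psk` file-path form. A minimal sketch of the two JSON-RPC requests behind those `rpc.py` calls follows; the exact wire-level parameter set is an assumption reconstructed from the command lines in the log, not captured traffic.

```python
import json

def rpc_request(req_id, method, params):
    """Build a JSON-RPC 2.0 request of the kind rpc.py sends over
    /var/tmp/bdevperf.sock. Sketch only; field set is abbreviated."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method,
            "params": params}

# Step 1: register the PSK file as a named keyring entry.
add_key = rpc_request(1, "keyring_file_add_key",
                      {"name": "key0", "path": "/tmp/tmp.H90Cb48IfE"})

# Step 2: attach the controller, referencing the key by name, not by path.
attach = rpc_request(2, "bdev_nvme_attach_controller", {
    "name": "nvme0",
    "trtype": "TCP",
    "adrfam": "ipv4",
    "traddr": "10.0.0.2",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "psk": "key0",
})

for req in (add_key, attach):
    print(json.dumps(req))
```

The by-name indirection is the point of the keyring API: the PSK material lives in one place, and controllers (or save_config output, as seen later in the log) only carry the key name.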
00:18:46.198 00:18:46.198 Latency(us) 00:18:46.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.198 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:46.198 Verification LBA range: start 0x0 length 0x2000 00:18:46.198 nvme0n1 : 1.05 4760.69 18.60 0.00 0.00 26307.09 5379.41 48278.19 00:18:46.198 =================================================================================================================== 00:18:46.198 Total : 4760.69 18.60 0.00 0.00 26307.09 5379.41 48278.19 00:18:46.198 0 00:18:46.198 15:29:03 -- target/tls.sh@234 -- # killprocess 1651397 00:18:46.198 15:29:03 -- common/autotest_common.sh@936 -- # '[' -z 1651397 ']' 00:18:46.198 15:29:03 -- common/autotest_common.sh@940 -- # kill -0 1651397 00:18:46.198 15:29:03 -- common/autotest_common.sh@941 -- # uname 00:18:46.198 15:29:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:46.198 15:29:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1651397 00:18:46.457 15:29:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:46.457 15:29:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:46.457 15:29:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1651397' 00:18:46.457 killing process with pid 1651397 00:18:46.457 15:29:03 -- common/autotest_common.sh@955 -- # kill 1651397 00:18:46.457 Received shutdown signal, test time was about 1.000000 seconds 00:18:46.457 00:18:46.457 Latency(us) 00:18:46.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.457 =================================================================================================================== 00:18:46.457 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:46.457 15:29:03 -- common/autotest_common.sh@960 -- # wait 1651397 00:18:46.457 15:29:03 -- target/tls.sh@235 -- # killprocess 1650968 00:18:46.457 15:29:03 -- common/autotest_common.sh@936 -- # 
'[' -z 1650968 ']' 00:18:46.457 15:29:03 -- common/autotest_common.sh@940 -- # kill -0 1650968 00:18:46.457 15:29:03 -- common/autotest_common.sh@941 -- # uname 00:18:46.457 15:29:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:46.457 15:29:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1650968 00:18:46.457 15:29:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:46.457 15:29:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:46.457 15:29:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1650968' 00:18:46.457 killing process with pid 1650968 00:18:46.457 15:29:03 -- common/autotest_common.sh@955 -- # kill 1650968 00:18:46.457 [2024-04-26 15:29:03.821600] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:46.457 15:29:03 -- common/autotest_common.sh@960 -- # wait 1650968 00:18:46.716 15:29:03 -- target/tls.sh@238 -- # nvmfappstart 00:18:46.716 15:29:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:46.716 15:29:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:46.716 15:29:03 -- common/autotest_common.sh@10 -- # set +x 00:18:46.716 15:29:03 -- nvmf/common.sh@470 -- # nvmfpid=1651848 00:18:46.716 15:29:03 -- nvmf/common.sh@471 -- # waitforlisten 1651848 00:18:46.716 15:29:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:46.716 15:29:03 -- common/autotest_common.sh@817 -- # '[' -z 1651848 ']' 00:18:46.716 15:29:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.716 15:29:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:46.716 15:29:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:46.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.716 15:29:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:46.716 15:29:03 -- common/autotest_common.sh@10 -- # set +x 00:18:46.716 [2024-04-26 15:29:04.017685] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:18:46.716 [2024-04-26 15:29:04.017736] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.716 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.716 [2024-04-26 15:29:04.083254] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.716 [2024-04-26 15:29:04.144924] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.716 [2024-04-26 15:29:04.144962] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.716 [2024-04-26 15:29:04.144970] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.716 [2024-04-26 15:29:04.144977] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.716 [2024-04-26 15:29:04.144982] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:46.716 [2024-04-26 15:29:04.145001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.659 15:29:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:47.659 15:29:04 -- common/autotest_common.sh@850 -- # return 0 00:18:47.659 15:29:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:47.659 15:29:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:47.659 15:29:04 -- common/autotest_common.sh@10 -- # set +x 00:18:47.659 15:29:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.659 15:29:04 -- target/tls.sh@239 -- # rpc_cmd 00:18:47.659 15:29:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.659 15:29:04 -- common/autotest_common.sh@10 -- # set +x 00:18:47.659 [2024-04-26 15:29:04.839533] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.659 malloc0 00:18:47.659 [2024-04-26 15:29:04.866337] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:47.659 [2024-04-26 15:29:04.866541] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.659 15:29:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.659 15:29:04 -- target/tls.sh@252 -- # bdevperf_pid=1652195 00:18:47.659 15:29:04 -- target/tls.sh@254 -- # waitforlisten 1652195 /var/tmp/bdevperf.sock 00:18:47.659 15:29:04 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:47.659 15:29:04 -- common/autotest_common.sh@817 -- # '[' -z 1652195 ']' 00:18:47.659 15:29:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:47.659 15:29:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:47.659 15:29:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:18:47.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:47.659 15:29:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:47.659 15:29:04 -- common/autotest_common.sh@10 -- # set +x 00:18:47.659 [2024-04-26 15:29:04.943651] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:18:47.659 [2024-04-26 15:29:04.943698] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1652195 ] 00:18:47.659 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.659 [2024-04-26 15:29:05.018252] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.659 [2024-04-26 15:29:05.070723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.599 15:29:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:48.600 15:29:05 -- common/autotest_common.sh@850 -- # return 0 00:18:48.600 15:29:05 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H90Cb48IfE 00:18:48.600 15:29:05 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:48.600 [2024-04-26 15:29:05.956669] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:48.600 nvme0n1 00:18:48.859 15:29:06 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:48.859 Running I/O for 1 seconds... 
00:18:49.836 00:18:49.836 Latency(us) 00:18:49.836 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.836 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:49.836 Verification LBA range: start 0x0 length 0x2000 00:18:49.836 nvme0n1 : 1.02 5571.18 21.76 0.00 0.00 22758.07 4478.29 95682.56 00:18:49.836 =================================================================================================================== 00:18:49.836 Total : 5571.18 21.76 0.00 0.00 22758.07 4478.29 95682.56 00:18:49.836 0 00:18:49.836 15:29:07 -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:49.836 15:29:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.836 15:29:07 -- common/autotest_common.sh@10 -- # set +x 00:18:50.097 15:29:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.097 15:29:07 -- target/tls.sh@263 -- # tgtcfg='{ 00:18:50.098 "subsystems": [ 00:18:50.098 { 00:18:50.098 "subsystem": "keyring", 00:18:50.098 "config": [ 00:18:50.098 { 00:18:50.098 "method": "keyring_file_add_key", 00:18:50.098 "params": { 00:18:50.098 "name": "key0", 00:18:50.098 "path": "/tmp/tmp.H90Cb48IfE" 00:18:50.098 } 00:18:50.098 } 00:18:50.098 ] 00:18:50.098 }, 00:18:50.098 { 00:18:50.098 "subsystem": "iobuf", 00:18:50.098 "config": [ 00:18:50.098 { 00:18:50.098 "method": "iobuf_set_options", 00:18:50.098 "params": { 00:18:50.098 "small_pool_count": 8192, 00:18:50.098 "large_pool_count": 1024, 00:18:50.098 "small_bufsize": 8192, 00:18:50.098 "large_bufsize": 135168 00:18:50.098 } 00:18:50.098 } 00:18:50.098 ] 00:18:50.098 }, 00:18:50.098 { 00:18:50.098 "subsystem": "sock", 00:18:50.098 "config": [ 00:18:50.098 { 00:18:50.098 "method": "sock_impl_set_options", 00:18:50.098 "params": { 00:18:50.098 "impl_name": "posix", 00:18:50.098 "recv_buf_size": 2097152, 00:18:50.098 "send_buf_size": 2097152, 00:18:50.098 "enable_recv_pipe": true, 00:18:50.098 "enable_quickack": false, 00:18:50.098 "enable_placement_id": 0, 
00:18:50.098 "enable_zerocopy_send_server": true, 00:18:50.098 "enable_zerocopy_send_client": false, 00:18:50.098 "zerocopy_threshold": 0, 00:18:50.098 "tls_version": 0, 00:18:50.098 "enable_ktls": false 00:18:50.098 } 00:18:50.098 }, 00:18:50.098 { 00:18:50.098 "method": "sock_impl_set_options", 00:18:50.098 "params": { 00:18:50.098 "impl_name": "ssl", 00:18:50.098 "recv_buf_size": 4096, 00:18:50.098 "send_buf_size": 4096, 00:18:50.098 "enable_recv_pipe": true, 00:18:50.098 "enable_quickack": false, 00:18:50.098 "enable_placement_id": 0, 00:18:50.098 "enable_zerocopy_send_server": true, 00:18:50.098 "enable_zerocopy_send_client": false, 00:18:50.098 "zerocopy_threshold": 0, 00:18:50.098 "tls_version": 0, 00:18:50.098 "enable_ktls": false 00:18:50.098 } 00:18:50.098 } 00:18:50.098 ] 00:18:50.098 }, 00:18:50.098 { 00:18:50.098 "subsystem": "vmd", 00:18:50.098 "config": [] 00:18:50.098 }, 00:18:50.098 { 00:18:50.098 "subsystem": "accel", 00:18:50.098 "config": [ 00:18:50.098 { 00:18:50.098 "method": "accel_set_options", 00:18:50.098 "params": { 00:18:50.098 "small_cache_size": 128, 00:18:50.098 "large_cache_size": 16, 00:18:50.098 "task_count": 2048, 00:18:50.098 "sequence_count": 2048, 00:18:50.098 "buf_count": 2048 00:18:50.098 } 00:18:50.098 } 00:18:50.098 ] 00:18:50.098 }, 00:18:50.098 { 00:18:50.098 "subsystem": "bdev", 00:18:50.098 "config": [ 00:18:50.098 { 00:18:50.098 "method": "bdev_set_options", 00:18:50.098 "params": { 00:18:50.098 "bdev_io_pool_size": 65535, 00:18:50.098 "bdev_io_cache_size": 256, 00:18:50.098 "bdev_auto_examine": true, 00:18:50.098 "iobuf_small_cache_size": 128, 00:18:50.098 "iobuf_large_cache_size": 16 00:18:50.098 } 00:18:50.098 }, 00:18:50.098 { 00:18:50.098 "method": "bdev_raid_set_options", 00:18:50.098 "params": { 00:18:50.098 "process_window_size_kb": 1024 00:18:50.098 } 00:18:50.098 }, 00:18:50.098 { 00:18:50.098 "method": "bdev_iscsi_set_options", 00:18:50.098 "params": { 00:18:50.098 "timeout_sec": 30 00:18:50.098 } 
00:18:50.098 }, 00:18:50.098 { 00:18:50.098 "method": "bdev_nvme_set_options", 00:18:50.098 "params": { 00:18:50.098 "action_on_timeout": "none", 00:18:50.098 "timeout_us": 0, 00:18:50.098 "timeout_admin_us": 0, 00:18:50.098 "keep_alive_timeout_ms": 10000, 00:18:50.098 "arbitration_burst": 0, 00:18:50.098 "low_priority_weight": 0, 00:18:50.098 "medium_priority_weight": 0, 00:18:50.098 "high_priority_weight": 0, 00:18:50.098 "nvme_adminq_poll_period_us": 10000, 00:18:50.098 "nvme_ioq_poll_period_us": 0, 00:18:50.098 "io_queue_requests": 0, 00:18:50.098 "delay_cmd_submit": true, 00:18:50.098 "transport_retry_count": 4, 00:18:50.098 "bdev_retry_count": 3, 00:18:50.098 "transport_ack_timeout": 0, 00:18:50.098 "ctrlr_loss_timeout_sec": 0, 00:18:50.098 "reconnect_delay_sec": 0, 00:18:50.098 "fast_io_fail_timeout_sec": 0, 00:18:50.098 "disable_auto_failback": false, 00:18:50.098 "generate_uuids": false, 00:18:50.098 "transport_tos": 0, 00:18:50.098 "nvme_error_stat": false, 00:18:50.098 "rdma_srq_size": 0, 00:18:50.098 "io_path_stat": false, 00:18:50.098 "allow_accel_sequence": false, 00:18:50.098 "rdma_max_cq_size": 0, 00:18:50.098 "rdma_cm_event_timeout_ms": 0, 00:18:50.098 "dhchap_digests": [ 00:18:50.098 "sha256", 00:18:50.098 "sha384", 00:18:50.098 "sha512" 00:18:50.098 ], 00:18:50.098 "dhchap_dhgroups": [ 00:18:50.098 "null", 00:18:50.098 "ffdhe2048", 00:18:50.098 "ffdhe3072", 00:18:50.098 "ffdhe4096", 00:18:50.098 "ffdhe6144", 00:18:50.098 "ffdhe8192" 00:18:50.098 ] 00:18:50.098 } 00:18:50.098 }, 00:18:50.098 { 00:18:50.098 "method": "bdev_nvme_set_hotplug", 00:18:50.098 "params": { 00:18:50.098 "period_us": 100000, 00:18:50.098 "enable": false 00:18:50.098 } 00:18:50.098 }, 00:18:50.098 { 00:18:50.098 "method": "bdev_malloc_create", 00:18:50.098 "params": { 00:18:50.098 "name": "malloc0", 00:18:50.098 "num_blocks": 8192, 00:18:50.098 "block_size": 4096, 00:18:50.098 "physical_block_size": 4096, 00:18:50.098 "uuid": "b3fcd04a-fd42-4d85-9451-4ab0094e80e9", 
00:18:50.098 "optimal_io_boundary": 0 00:18:50.098 } 00:18:50.098 }, 00:18:50.098 { 00:18:50.098 "method": "bdev_wait_for_examine" 00:18:50.098 } 00:18:50.098 ] 00:18:50.098 }, 00:18:50.098 { 00:18:50.098 "subsystem": "nbd", 00:18:50.098 "config": [] 00:18:50.098 }, 00:18:50.098 { 00:18:50.098 "subsystem": "scheduler", 00:18:50.098 "config": [ 00:18:50.098 { 00:18:50.098 "method": "framework_set_scheduler", 00:18:50.098 "params": { 00:18:50.098 "name": "static" 00:18:50.098 } 00:18:50.098 } 00:18:50.098 ] 00:18:50.098 }, 00:18:50.098 { 00:18:50.098 "subsystem": "nvmf", 00:18:50.098 "config": [ 00:18:50.098 { 00:18:50.098 "method": "nvmf_set_config", 00:18:50.098 "params": { 00:18:50.098 "discovery_filter": "match_any", 00:18:50.098 "admin_cmd_passthru": { 00:18:50.098 "identify_ctrlr": false 00:18:50.098 } 00:18:50.098 } 00:18:50.098 }, 00:18:50.098 { 00:18:50.098 "method": "nvmf_set_max_subsystems", 00:18:50.098 "params": { 00:18:50.098 "max_subsystems": 1024 00:18:50.098 } 00:18:50.098 }, 00:18:50.098 { 00:18:50.098 "method": "nvmf_set_crdt", 00:18:50.098 "params": { 00:18:50.098 "crdt1": 0, 00:18:50.098 "crdt2": 0, 00:18:50.098 "crdt3": 0 00:18:50.098 } 00:18:50.098 }, 00:18:50.098 { 00:18:50.098 "method": "nvmf_create_transport", 00:18:50.098 "params": { 00:18:50.098 "trtype": "TCP", 00:18:50.098 "max_queue_depth": 128, 00:18:50.099 "max_io_qpairs_per_ctrlr": 127, 00:18:50.099 "in_capsule_data_size": 4096, 00:18:50.099 "max_io_size": 131072, 00:18:50.099 "io_unit_size": 131072, 00:18:50.099 "max_aq_depth": 128, 00:18:50.099 "num_shared_buffers": 511, 00:18:50.099 "buf_cache_size": 4294967295, 00:18:50.099 "dif_insert_or_strip": false, 00:18:50.099 "zcopy": false, 00:18:50.099 "c2h_success": false, 00:18:50.099 "sock_priority": 0, 00:18:50.099 "abort_timeout_sec": 1, 00:18:50.099 "ack_timeout": 0, 00:18:50.099 "data_wr_pool_size": 0 00:18:50.099 } 00:18:50.099 }, 00:18:50.099 { 00:18:50.099 "method": "nvmf_create_subsystem", 00:18:50.099 "params": { 00:18:50.099 
"nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.099 "allow_any_host": false, 00:18:50.099 "serial_number": "00000000000000000000", 00:18:50.099 "model_number": "SPDK bdev Controller", 00:18:50.099 "max_namespaces": 32, 00:18:50.099 "min_cntlid": 1, 00:18:50.099 "max_cntlid": 65519, 00:18:50.099 "ana_reporting": false 00:18:50.099 } 00:18:50.099 }, 00:18:50.099 { 00:18:50.099 "method": "nvmf_subsystem_add_host", 00:18:50.099 "params": { 00:18:50.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.099 "host": "nqn.2016-06.io.spdk:host1", 00:18:50.099 "psk": "key0" 00:18:50.099 } 00:18:50.099 }, 00:18:50.099 { 00:18:50.099 "method": "nvmf_subsystem_add_ns", 00:18:50.099 "params": { 00:18:50.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.099 "namespace": { 00:18:50.099 "nsid": 1, 00:18:50.099 "bdev_name": "malloc0", 00:18:50.099 "nguid": "B3FCD04AFD424D8594514AB0094E80E9", 00:18:50.099 "uuid": "b3fcd04a-fd42-4d85-9451-4ab0094e80e9", 00:18:50.099 "no_auto_visible": false 00:18:50.099 } 00:18:50.099 } 00:18:50.099 }, 00:18:50.099 { 00:18:50.099 "method": "nvmf_subsystem_add_listener", 00:18:50.099 "params": { 00:18:50.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.099 "listen_address": { 00:18:50.099 "trtype": "TCP", 00:18:50.099 "adrfam": "IPv4", 00:18:50.099 "traddr": "10.0.0.2", 00:18:50.099 "trsvcid": "4420" 00:18:50.099 }, 00:18:50.099 "secure_channel": true 00:18:50.099 } 00:18:50.099 } 00:18:50.099 ] 00:18:50.099 } 00:18:50.099 ] 00:18:50.099 }' 00:18:50.099 15:29:07 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:50.099 15:29:07 -- target/tls.sh@264 -- # bperfcfg='{ 00:18:50.099 "subsystems": [ 00:18:50.099 { 00:18:50.099 "subsystem": "keyring", 00:18:50.099 "config": [ 00:18:50.099 { 00:18:50.099 "method": "keyring_file_add_key", 00:18:50.099 "params": { 00:18:50.099 "name": "key0", 00:18:50.099 "path": "/tmp/tmp.H90Cb48IfE" 00:18:50.099 } 00:18:50.099 } 00:18:50.099 ] 
00:18:50.099 }, 00:18:50.099 { 00:18:50.099 "subsystem": "iobuf", 00:18:50.099 "config": [ 00:18:50.099 { 00:18:50.099 "method": "iobuf_set_options", 00:18:50.099 "params": { 00:18:50.099 "small_pool_count": 8192, 00:18:50.099 "large_pool_count": 1024, 00:18:50.099 "small_bufsize": 8192, 00:18:50.099 "large_bufsize": 135168 00:18:50.099 } 00:18:50.099 } 00:18:50.099 ] 00:18:50.099 }, 00:18:50.099 { 00:18:50.099 "subsystem": "sock", 00:18:50.099 "config": [ 00:18:50.099 { 00:18:50.099 "method": "sock_impl_set_options", 00:18:50.099 "params": { 00:18:50.099 "impl_name": "posix", 00:18:50.099 "recv_buf_size": 2097152, 00:18:50.099 "send_buf_size": 2097152, 00:18:50.099 "enable_recv_pipe": true, 00:18:50.099 "enable_quickack": false, 00:18:50.099 "enable_placement_id": 0, 00:18:50.099 "enable_zerocopy_send_server": true, 00:18:50.099 "enable_zerocopy_send_client": false, 00:18:50.099 "zerocopy_threshold": 0, 00:18:50.099 "tls_version": 0, 00:18:50.099 "enable_ktls": false 00:18:50.099 } 00:18:50.099 }, 00:18:50.099 { 00:18:50.099 "method": "sock_impl_set_options", 00:18:50.099 "params": { 00:18:50.099 "impl_name": "ssl", 00:18:50.099 "recv_buf_size": 4096, 00:18:50.099 "send_buf_size": 4096, 00:18:50.099 "enable_recv_pipe": true, 00:18:50.099 "enable_quickack": false, 00:18:50.099 "enable_placement_id": 0, 00:18:50.099 "enable_zerocopy_send_server": true, 00:18:50.099 "enable_zerocopy_send_client": false, 00:18:50.099 "zerocopy_threshold": 0, 00:18:50.099 "tls_version": 0, 00:18:50.099 "enable_ktls": false 00:18:50.099 } 00:18:50.099 } 00:18:50.099 ] 00:18:50.099 }, 00:18:50.099 { 00:18:50.099 "subsystem": "vmd", 00:18:50.099 "config": [] 00:18:50.099 }, 00:18:50.099 { 00:18:50.099 "subsystem": "accel", 00:18:50.099 "config": [ 00:18:50.099 { 00:18:50.099 "method": "accel_set_options", 00:18:50.099 "params": { 00:18:50.099 "small_cache_size": 128, 00:18:50.099 "large_cache_size": 16, 00:18:50.099 "task_count": 2048, 00:18:50.099 "sequence_count": 2048, 00:18:50.099 
"buf_count": 2048 00:18:50.099 } 00:18:50.099 } 00:18:50.099 ] 00:18:50.099 }, 00:18:50.099 { 00:18:50.099 "subsystem": "bdev", 00:18:50.099 "config": [ 00:18:50.099 { 00:18:50.099 "method": "bdev_set_options", 00:18:50.099 "params": { 00:18:50.099 "bdev_io_pool_size": 65535, 00:18:50.099 "bdev_io_cache_size": 256, 00:18:50.099 "bdev_auto_examine": true, 00:18:50.099 "iobuf_small_cache_size": 128, 00:18:50.099 "iobuf_large_cache_size": 16 00:18:50.099 } 00:18:50.099 }, 00:18:50.099 { 00:18:50.099 "method": "bdev_raid_set_options", 00:18:50.099 "params": { 00:18:50.099 "process_window_size_kb": 1024 00:18:50.099 } 00:18:50.099 }, 00:18:50.099 { 00:18:50.099 "method": "bdev_iscsi_set_options", 00:18:50.099 "params": { 00:18:50.099 "timeout_sec": 30 00:18:50.099 } 00:18:50.099 }, 00:18:50.099 { 00:18:50.099 "method": "bdev_nvme_set_options", 00:18:50.099 "params": { 00:18:50.099 "action_on_timeout": "none", 00:18:50.099 "timeout_us": 0, 00:18:50.099 "timeout_admin_us": 0, 00:18:50.099 "keep_alive_timeout_ms": 10000, 00:18:50.099 "arbitration_burst": 0, 00:18:50.099 "low_priority_weight": 0, 00:18:50.099 "medium_priority_weight": 0, 00:18:50.099 "high_priority_weight": 0, 00:18:50.099 "nvme_adminq_poll_period_us": 10000, 00:18:50.099 "nvme_ioq_poll_period_us": 0, 00:18:50.099 "io_queue_requests": 512, 00:18:50.099 "delay_cmd_submit": true, 00:18:50.099 "transport_retry_count": 4, 00:18:50.099 "bdev_retry_count": 3, 00:18:50.099 "transport_ack_timeout": 0, 00:18:50.099 "ctrlr_loss_timeout_sec": 0, 00:18:50.099 "reconnect_delay_sec": 0, 00:18:50.099 "fast_io_fail_timeout_sec": 0, 00:18:50.099 "disable_auto_failback": false, 00:18:50.099 "generate_uuids": false, 00:18:50.099 "transport_tos": 0, 00:18:50.099 "nvme_error_stat": false, 00:18:50.099 "rdma_srq_size": 0, 00:18:50.099 "io_path_stat": false, 00:18:50.099 "allow_accel_sequence": false, 00:18:50.099 "rdma_max_cq_size": 0, 00:18:50.099 "rdma_cm_event_timeout_ms": 0, 00:18:50.099 "dhchap_digests": [ 00:18:50.100 
"sha256", 00:18:50.100 "sha384", 00:18:50.100 "sha512" 00:18:50.100 ], 00:18:50.100 "dhchap_dhgroups": [ 00:18:50.100 "null", 00:18:50.100 "ffdhe2048", 00:18:50.100 "ffdhe3072", 00:18:50.100 "ffdhe4096", 00:18:50.100 "ffdhe6144", 00:18:50.100 "ffdhe8192" 00:18:50.100 ] 00:18:50.100 } 00:18:50.100 }, 00:18:50.100 { 00:18:50.100 "method": "bdev_nvme_attach_controller", 00:18:50.100 "params": { 00:18:50.100 "name": "nvme0", 00:18:50.100 "trtype": "TCP", 00:18:50.100 "adrfam": "IPv4", 00:18:50.100 "traddr": "10.0.0.2", 00:18:50.100 "trsvcid": "4420", 00:18:50.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.100 "prchk_reftag": false, 00:18:50.100 "prchk_guard": false, 00:18:50.100 "ctrlr_loss_timeout_sec": 0, 00:18:50.100 "reconnect_delay_sec": 0, 00:18:50.100 "fast_io_fail_timeout_sec": 0, 00:18:50.100 "psk": "key0", 00:18:50.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:50.100 "hdgst": false, 00:18:50.100 "ddgst": false 00:18:50.100 } 00:18:50.100 }, 00:18:50.100 { 00:18:50.100 "method": "bdev_nvme_set_hotplug", 00:18:50.100 "params": { 00:18:50.100 "period_us": 100000, 00:18:50.100 "enable": false 00:18:50.100 } 00:18:50.100 }, 00:18:50.100 { 00:18:50.100 "method": "bdev_enable_histogram", 00:18:50.100 "params": { 00:18:50.100 "name": "nvme0n1", 00:18:50.100 "enable": true 00:18:50.100 } 00:18:50.100 }, 00:18:50.100 { 00:18:50.100 "method": "bdev_wait_for_examine" 00:18:50.100 } 00:18:50.100 ] 00:18:50.100 }, 00:18:50.100 { 00:18:50.100 "subsystem": "nbd", 00:18:50.100 "config": [] 00:18:50.100 } 00:18:50.100 ] 00:18:50.100 }' 00:18:50.100 15:29:07 -- target/tls.sh@266 -- # killprocess 1652195 00:18:50.100 15:29:07 -- common/autotest_common.sh@936 -- # '[' -z 1652195 ']' 00:18:50.100 15:29:07 -- common/autotest_common.sh@940 -- # kill -0 1652195 00:18:50.100 15:29:07 -- common/autotest_common.sh@941 -- # uname 00:18:50.100 15:29:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:50.100 15:29:07 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 1652195 00:18:50.362 15:29:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:50.362 15:29:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:50.362 15:29:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1652195' 00:18:50.362 killing process with pid 1652195 00:18:50.362 15:29:07 -- common/autotest_common.sh@955 -- # kill 1652195 00:18:50.362 Received shutdown signal, test time was about 1.000000 seconds 00:18:50.362 00:18:50.362 Latency(us) 00:18:50.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.362 =================================================================================================================== 00:18:50.362 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:50.362 15:29:07 -- common/autotest_common.sh@960 -- # wait 1652195 00:18:50.362 15:29:07 -- target/tls.sh@267 -- # killprocess 1651848 00:18:50.362 15:29:07 -- common/autotest_common.sh@936 -- # '[' -z 1651848 ']' 00:18:50.362 15:29:07 -- common/autotest_common.sh@940 -- # kill -0 1651848 00:18:50.362 15:29:07 -- common/autotest_common.sh@941 -- # uname 00:18:50.362 15:29:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:50.362 15:29:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1651848 00:18:50.362 15:29:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:50.362 15:29:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:50.362 15:29:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1651848' 00:18:50.362 killing process with pid 1651848 00:18:50.362 15:29:07 -- common/autotest_common.sh@955 -- # kill 1651848 00:18:50.362 15:29:07 -- common/autotest_common.sh@960 -- # wait 1651848 00:18:50.623 15:29:07 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:50.623 15:29:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:50.623 15:29:07 -- 
common/autotest_common.sh@710 -- # xtrace_disable 00:18:50.623 15:29:07 -- target/tls.sh@269 -- # echo '{ 00:18:50.623 "subsystems": [ 00:18:50.623 { 00:18:50.623 "subsystem": "keyring", 00:18:50.623 "config": [ 00:18:50.623 { 00:18:50.623 "method": "keyring_file_add_key", 00:18:50.623 "params": { 00:18:50.623 "name": "key0", 00:18:50.623 "path": "/tmp/tmp.H90Cb48IfE" 00:18:50.623 } 00:18:50.623 } 00:18:50.623 ] 00:18:50.623 }, 00:18:50.623 { 00:18:50.623 "subsystem": "iobuf", 00:18:50.623 "config": [ 00:18:50.623 { 00:18:50.623 "method": "iobuf_set_options", 00:18:50.623 "params": { 00:18:50.623 "small_pool_count": 8192, 00:18:50.623 "large_pool_count": 1024, 00:18:50.623 "small_bufsize": 8192, 00:18:50.623 "large_bufsize": 135168 00:18:50.623 } 00:18:50.623 } 00:18:50.623 ] 00:18:50.623 }, 00:18:50.623 { 00:18:50.623 "subsystem": "sock", 00:18:50.623 "config": [ 00:18:50.623 { 00:18:50.623 "method": "sock_impl_set_options", 00:18:50.623 "params": { 00:18:50.623 "impl_name": "posix", 00:18:50.623 "recv_buf_size": 2097152, 00:18:50.623 "send_buf_size": 2097152, 00:18:50.623 "enable_recv_pipe": true, 00:18:50.623 "enable_quickack": false, 00:18:50.623 "enable_placement_id": 0, 00:18:50.623 "enable_zerocopy_send_server": true, 00:18:50.623 "enable_zerocopy_send_client": false, 00:18:50.623 "zerocopy_threshold": 0, 00:18:50.623 "tls_version": 0, 00:18:50.623 "enable_ktls": false 00:18:50.623 } 00:18:50.623 }, 00:18:50.623 { 00:18:50.623 "method": "sock_impl_set_options", 00:18:50.623 "params": { 00:18:50.623 "impl_name": "ssl", 00:18:50.623 "recv_buf_size": 4096, 00:18:50.623 "send_buf_size": 4096, 00:18:50.623 "enable_recv_pipe": true, 00:18:50.623 "enable_quickack": false, 00:18:50.623 "enable_placement_id": 0, 00:18:50.623 "enable_zerocopy_send_server": true, 00:18:50.623 "enable_zerocopy_send_client": false, 00:18:50.623 "zerocopy_threshold": 0, 00:18:50.623 "tls_version": 0, 00:18:50.623 "enable_ktls": false 00:18:50.623 } 00:18:50.623 } 00:18:50.623 ] 
00:18:50.623 }, 00:18:50.623 { 00:18:50.623 "subsystem": "vmd", 00:18:50.623 "config": [] 00:18:50.623 }, 00:18:50.623 { 00:18:50.623 "subsystem": "accel", 00:18:50.623 "config": [ 00:18:50.623 { 00:18:50.623 "method": "accel_set_options", 00:18:50.623 "params": { 00:18:50.623 "small_cache_size": 128, 00:18:50.623 "large_cache_size": 16, 00:18:50.623 "task_count": 2048, 00:18:50.623 "sequence_count": 2048, 00:18:50.623 "buf_count": 2048 00:18:50.623 } 00:18:50.623 } 00:18:50.623 ] 00:18:50.623 }, 00:18:50.623 { 00:18:50.623 "subsystem": "bdev", 00:18:50.623 "config": [ 00:18:50.623 { 00:18:50.623 "method": "bdev_set_options", 00:18:50.623 "params": { 00:18:50.623 "bdev_io_pool_size": 65535, 00:18:50.623 "bdev_io_cache_size": 256, 00:18:50.623 "bdev_auto_examine": true, 00:18:50.623 "iobuf_small_cache_size": 128, 00:18:50.623 "iobuf_large_cache_size": 16 00:18:50.623 } 00:18:50.623 }, 00:18:50.623 { 00:18:50.623 "method": "bdev_raid_set_options", 00:18:50.623 "params": { 00:18:50.623 "process_window_size_kb": 1024 00:18:50.623 } 00:18:50.623 }, 00:18:50.623 { 00:18:50.623 "method": "bdev_iscsi_set_options", 00:18:50.623 "params": { 00:18:50.623 "timeout_sec": 30 00:18:50.623 } 00:18:50.623 }, 00:18:50.623 { 00:18:50.623 "method": "bdev_nvme_set_options", 00:18:50.623 "params": { 00:18:50.623 "action_on_timeout": "none", 00:18:50.623 "timeout_us": 0, 00:18:50.623 "timeout_admin_us": 0, 00:18:50.623 "keep_alive_timeout_ms": 10000, 00:18:50.623 "arbitration_burst": 0, 00:18:50.623 "low_priority_weight": 0, 00:18:50.623 "medium_priority_weight": 0, 00:18:50.623 "high_priority_weight": 0, 00:18:50.623 "nvme_adminq_poll_period_us": 10000, 00:18:50.623 "nvme_ioq_poll_period_us": 0, 00:18:50.623 "io_queue_requests": 0, 00:18:50.623 "delay_cmd_submit": true, 00:18:50.623 "transport_retry_count": 4, 00:18:50.623 "bdev_retry_count": 3, 00:18:50.623 "transport_ack_timeout": 0, 00:18:50.623 "ctrlr_loss_timeout_sec": 0, 00:18:50.623 "reconnect_delay_sec": 0, 00:18:50.623 
"fast_io_fail_timeout_sec": 0, 00:18:50.623 "disable_auto_failback": false, 00:18:50.623 "generate_uuids": false, 00:18:50.623 "transport_tos": 0, 00:18:50.623 "nvme_error_stat": false, 00:18:50.623 "rdma_srq_size": 0, 00:18:50.623 "io_path_stat": false, 00:18:50.623 "allow_accel_sequence": false, 00:18:50.623 "rdma_max_cq_size": 0, 00:18:50.623 "rdma_cm_event_timeout_ms": 0, 00:18:50.623 "dhchap_digests": [ 00:18:50.623 "sha256", 00:18:50.623 "sha384", 00:18:50.623 "sha512" 00:18:50.623 ], 00:18:50.623 "dhchap_dhgroups": [ 00:18:50.623 "null", 00:18:50.623 "ffdhe2048", 00:18:50.623 "ffdhe3072", 00:18:50.623 "ffdhe4096", 00:18:50.623 "ffdhe6144", 00:18:50.623 "ffdhe8192" 00:18:50.623 ] 00:18:50.623 } 00:18:50.623 }, 00:18:50.623 { 00:18:50.623 "method": "bdev_nvme_set_hotplug", 00:18:50.623 "params": { 00:18:50.623 "period_us": 100000, 00:18:50.623 "enable": false 00:18:50.623 } 00:18:50.623 }, 00:18:50.623 { 00:18:50.623 "method": "bdev_malloc_create", 00:18:50.623 "params": { 00:18:50.623 "name": "malloc0", 00:18:50.623 "num_blocks": 8192, 00:18:50.623 "block_size": 4096, 00:18:50.623 "physical_block_size": 4096, 00:18:50.623 "uuid": "b3fcd04a-fd42-4d85-9451-4ab0094e80e9", 00:18:50.623 "optimal_io_boundary": 0 00:18:50.623 } 00:18:50.623 }, 00:18:50.623 { 00:18:50.623 "method": "bdev_wait_for_examine" 00:18:50.623 } 00:18:50.623 ] 00:18:50.623 }, 00:18:50.623 { 00:18:50.623 "subsystem": "nbd", 00:18:50.623 "config": [] 00:18:50.623 }, 00:18:50.623 { 00:18:50.623 "subsystem": "scheduler", 00:18:50.623 "config": [ 00:18:50.623 { 00:18:50.623 "method": "framework_set_scheduler", 00:18:50.623 "params": { 00:18:50.623 "name": "static" 00:18:50.623 } 00:18:50.623 } 00:18:50.623 ] 00:18:50.623 }, 00:18:50.623 { 00:18:50.623 "subsystem": "nvmf", 00:18:50.623 "config": [ 00:18:50.623 { 00:18:50.623 "method": "nvmf_set_config", 00:18:50.623 "params": { 00:18:50.623 "discovery_filter": "match_any", 00:18:50.623 "admin_cmd_passthru": { 00:18:50.623 "identify_ctrlr": false 
00:18:50.623 } 00:18:50.623 } 00:18:50.623 }, 00:18:50.623 { 00:18:50.623 "method": "nvmf_set_max_subsystems", 00:18:50.623 "params": { 00:18:50.623 "max_subsystems": 1024 00:18:50.623 } 00:18:50.623 }, 00:18:50.623 { 00:18:50.623 "method": "nvmf_set_crdt", 00:18:50.623 "params": { 00:18:50.624 "crdt1": 0, 00:18:50.624 "crdt2": 0, 00:18:50.624 "crdt3": 0 00:18:50.624 } 00:18:50.624 }, 00:18:50.624 { 00:18:50.624 "method": "nvmf_create_transport", 00:18:50.624 "params": { 00:18:50.624 "trtype": "TCP", 00:18:50.624 "max_queue_depth": 128, 00:18:50.624 "max_io_qpairs_per_ctrlr": 127, 00:18:50.624 "in_capsule_data_size": 4096, 00:18:50.624 "max_io_size": 131072, 00:18:50.624 "io_unit_size": 131072, 00:18:50.624 "max_aq_depth": 128, 00:18:50.624 "num_shared_buffers": 511, 00:18:50.624 "buf_cache_size": 4294967295, 00:18:50.624 "dif_insert_or_strip": false, 00:18:50.624 "zcopy": false, 00:18:50.624 "c2h_success": false, 00:18:50.624 "sock_priority": 0, 00:18:50.624 "abort_timeout_sec": 1, 00:18:50.624 "ack_timeout": 0, 00:18:50.624 "data_wr_pool_size": 0 00:18:50.624 } 00:18:50.624 }, 00:18:50.624 { 00:18:50.624 "method": "nvmf_create_subsystem", 00:18:50.624 "params": { 00:18:50.624 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.624 "allow_any_host": false, 00:18:50.624 "serial_number": "00000000000000000000", 00:18:50.624 "model_number": "SPDK bdev Controller", 00:18:50.624 "max_namespaces": 32, 00:18:50.624 "min_cntlid": 1, 00:18:50.624 "max_cntlid": 65519, 00:18:50.624 "ana_reporting": false 00:18:50.624 } 00:18:50.624 }, 00:18:50.624 { 00:18:50.624 "method": "nvmf_subsystem_add_host", 00:18:50.624 "params": { 00:18:50.624 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.624 "host": "nqn.2016-06.io.spdk:host1", 00:18:50.624 "psk": "key0" 00:18:50.624 } 00:18:50.624 }, 00:18:50.624 { 00:18:50.624 "method": "nvmf_subsystem_add_ns", 00:18:50.624 "params": { 00:18:50.624 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.624 "namespace": { 00:18:50.624 "nsid": 1, 00:18:50.624 
"bdev_name": "malloc0", 00:18:50.624 "nguid": "B3FCD04AFD424D8594514AB0094E80E9", 00:18:50.624 "uuid": "b3fcd04a-fd42-4d85-9451-4ab0094e80e9", 00:18:50.624 "no_auto_visible": false 00:18:50.624 } 00:18:50.624 } 00:18:50.624 }, 00:18:50.624 { 00:18:50.624 "method": "nvmf_subsystem_add_listener", 00:18:50.624 "params": { 00:18:50.624 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.624 "listen_address": { 00:18:50.624 "trtype": "TCP", 00:18:50.624 "adrfam": "IPv4", 00:18:50.624 "traddr": "10.0.0.2", 00:18:50.624 "trsvcid": "4420" 00:18:50.624 }, 00:18:50.624 "secure_channel": true 00:18:50.624 } 00:18:50.624 } 00:18:50.624 ] 00:18:50.624 } 00:18:50.624 ] 00:18:50.624 }' 00:18:50.624 15:29:07 -- common/autotest_common.sh@10 -- # set +x 00:18:50.624 15:29:07 -- nvmf/common.sh@470 -- # nvmfpid=1652682 00:18:50.624 15:29:07 -- nvmf/common.sh@471 -- # waitforlisten 1652682 00:18:50.624 15:29:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:50.624 15:29:07 -- common/autotest_common.sh@817 -- # '[' -z 1652682 ']' 00:18:50.624 15:29:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.624 15:29:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:50.624 15:29:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.624 15:29:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:50.624 15:29:07 -- common/autotest_common.sh@10 -- # set +x 00:18:50.624 [2024-04-26 15:29:07.959300] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:18:50.624 [2024-04-26 15:29:07.959359] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.624 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.624 [2024-04-26 15:29:08.025889] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.886 [2024-04-26 15:29:08.090157] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.886 [2024-04-26 15:29:08.090194] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.886 [2024-04-26 15:29:08.090202] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.886 [2024-04-26 15:29:08.090208] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.886 [2024-04-26 15:29:08.090214] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:50.886 [2024-04-26 15:29:08.090265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.886 [2024-04-26 15:29:08.279594] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.886 [2024-04-26 15:29:08.311605] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:50.886 [2024-04-26 15:29:08.324161] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.457 15:29:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:51.457 15:29:08 -- common/autotest_common.sh@850 -- # return 0 00:18:51.457 15:29:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:51.457 15:29:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:51.457 15:29:08 -- common/autotest_common.sh@10 -- # set +x 00:18:51.457 15:29:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.457 15:29:08 -- target/tls.sh@272 -- # bdevperf_pid=1652908 00:18:51.457 15:29:08 -- target/tls.sh@273 -- # waitforlisten 1652908 /var/tmp/bdevperf.sock 00:18:51.457 15:29:08 -- common/autotest_common.sh@817 -- # '[' -z 1652908 ']' 00:18:51.457 15:29:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:51.457 15:29:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:51.457 15:29:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:51.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:51.457 15:29:08 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:51.457 15:29:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:51.457 15:29:08 -- common/autotest_common.sh@10 -- # set +x 00:18:51.457 15:29:08 -- target/tls.sh@270 -- # echo '{ 00:18:51.457 "subsystems": [ 00:18:51.457 { 00:18:51.457 "subsystem": "keyring", 00:18:51.457 "config": [ 00:18:51.457 { 00:18:51.457 "method": "keyring_file_add_key", 00:18:51.457 "params": { 00:18:51.457 "name": "key0", 00:18:51.457 "path": "/tmp/tmp.H90Cb48IfE" 00:18:51.457 } 00:18:51.457 } 00:18:51.457 ] 00:18:51.457 }, 00:18:51.457 { 00:18:51.457 "subsystem": "iobuf", 00:18:51.457 "config": [ 00:18:51.457 { 00:18:51.457 "method": "iobuf_set_options", 00:18:51.457 "params": { 00:18:51.457 "small_pool_count": 8192, 00:18:51.457 "large_pool_count": 1024, 00:18:51.457 "small_bufsize": 8192, 00:18:51.457 "large_bufsize": 135168 00:18:51.457 } 00:18:51.457 } 00:18:51.457 ] 00:18:51.457 }, 00:18:51.457 { 00:18:51.457 "subsystem": "sock", 00:18:51.457 "config": [ 00:18:51.457 { 00:18:51.457 "method": "sock_impl_set_options", 00:18:51.457 "params": { 00:18:51.457 "impl_name": "posix", 00:18:51.457 "recv_buf_size": 2097152, 00:18:51.457 "send_buf_size": 2097152, 00:18:51.457 "enable_recv_pipe": true, 00:18:51.457 "enable_quickack": false, 00:18:51.457 "enable_placement_id": 0, 00:18:51.457 "enable_zerocopy_send_server": true, 00:18:51.457 "enable_zerocopy_send_client": false, 00:18:51.457 "zerocopy_threshold": 0, 00:18:51.457 "tls_version": 0, 00:18:51.457 "enable_ktls": false 00:18:51.457 } 00:18:51.457 }, 00:18:51.457 { 00:18:51.457 "method": "sock_impl_set_options", 00:18:51.457 "params": { 00:18:51.457 "impl_name": "ssl", 00:18:51.457 "recv_buf_size": 4096, 00:18:51.457 "send_buf_size": 4096, 00:18:51.457 "enable_recv_pipe": true, 00:18:51.457 "enable_quickack": false, 
00:18:51.457 "enable_placement_id": 0, 00:18:51.457 "enable_zerocopy_send_server": true, 00:18:51.457 "enable_zerocopy_send_client": false, 00:18:51.457 "zerocopy_threshold": 0, 00:18:51.457 "tls_version": 0, 00:18:51.457 "enable_ktls": false 00:18:51.457 } 00:18:51.457 } 00:18:51.457 ] 00:18:51.457 }, 00:18:51.457 { 00:18:51.457 "subsystem": "vmd", 00:18:51.457 "config": [] 00:18:51.457 }, 00:18:51.457 { 00:18:51.457 "subsystem": "accel", 00:18:51.457 "config": [ 00:18:51.457 { 00:18:51.457 "method": "accel_set_options", 00:18:51.457 "params": { 00:18:51.457 "small_cache_size": 128, 00:18:51.457 "large_cache_size": 16, 00:18:51.457 "task_count": 2048, 00:18:51.457 "sequence_count": 2048, 00:18:51.457 "buf_count": 2048 00:18:51.457 } 00:18:51.457 } 00:18:51.457 ] 00:18:51.457 }, 00:18:51.457 { 00:18:51.457 "subsystem": "bdev", 00:18:51.457 "config": [ 00:18:51.457 { 00:18:51.457 "method": "bdev_set_options", 00:18:51.457 "params": { 00:18:51.457 "bdev_io_pool_size": 65535, 00:18:51.457 "bdev_io_cache_size": 256, 00:18:51.457 "bdev_auto_examine": true, 00:18:51.457 "iobuf_small_cache_size": 128, 00:18:51.457 "iobuf_large_cache_size": 16 00:18:51.457 } 00:18:51.457 }, 00:18:51.457 { 00:18:51.458 "method": "bdev_raid_set_options", 00:18:51.458 "params": { 00:18:51.458 "process_window_size_kb": 1024 00:18:51.458 } 00:18:51.458 }, 00:18:51.458 { 00:18:51.458 "method": "bdev_iscsi_set_options", 00:18:51.458 "params": { 00:18:51.458 "timeout_sec": 30 00:18:51.458 } 00:18:51.458 }, 00:18:51.458 { 00:18:51.458 "method": "bdev_nvme_set_options", 00:18:51.458 "params": { 00:18:51.458 "action_on_timeout": "none", 00:18:51.458 "timeout_us": 0, 00:18:51.458 "timeout_admin_us": 0, 00:18:51.458 "keep_alive_timeout_ms": 10000, 00:18:51.458 "arbitration_burst": 0, 00:18:51.458 "low_priority_weight": 0, 00:18:51.458 "medium_priority_weight": 0, 00:18:51.458 "high_priority_weight": 0, 00:18:51.458 "nvme_adminq_poll_period_us": 10000, 00:18:51.458 "nvme_ioq_poll_period_us": 0, 
00:18:51.458 "io_queue_requests": 512, 00:18:51.458 "delay_cmd_submit": true, 00:18:51.458 "transport_retry_count": 4, 00:18:51.458 "bdev_retry_count": 3, 00:18:51.458 "transport_ack_timeout": 0, 00:18:51.458 "ctrlr_loss_timeout_sec": 0, 00:18:51.458 "reconnect_delay_sec": 0, 00:18:51.458 "fast_io_fail_timeout_sec": 0, 00:18:51.458 "disable_auto_failback": false, 00:18:51.458 "generate_uuids": false, 00:18:51.458 "transport_tos": 0, 00:18:51.458 "nvme_error_stat": false, 00:18:51.458 "rdma_srq_size": 0, 00:18:51.458 "io_path_stat": false, 00:18:51.458 "allow_accel_sequence": false, 00:18:51.458 "rdma_max_cq_size": 0, 00:18:51.458 "rdma_cm_event_timeout_ms": 0, 00:18:51.458 "dhchap_digests": [ 00:18:51.458 "sha256", 00:18:51.458 "sha384", 00:18:51.458 "sha512" 00:18:51.458 ], 00:18:51.458 "dhchap_dhgroups": [ 00:18:51.458 "null", 00:18:51.458 "ffdhe2048", 00:18:51.458 "ffdhe3072", 00:18:51.458 "ffdhe4096", 00:18:51.458 "ffdhe6144", 00:18:51.458 "ffdhe8192" 00:18:51.458 ] 00:18:51.458 } 00:18:51.458 }, 00:18:51.458 { 00:18:51.458 "method": "bdev_nvme_attach_controller", 00:18:51.458 "params": { 00:18:51.458 "name": "nvme0", 00:18:51.458 "trtype": "TCP", 00:18:51.458 "adrfam": "IPv4", 00:18:51.458 "traddr": "10.0.0.2", 00:18:51.458 "trsvcid": "4420", 00:18:51.458 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.458 "prchk_reftag": false, 00:18:51.458 "prchk_guard": false, 00:18:51.458 "ctrlr_loss_timeout_sec": 0, 00:18:51.458 "reconnect_delay_sec": 0, 00:18:51.458 "fast_io_fail_timeout_sec": 0, 00:18:51.458 "psk": "key0", 00:18:51.458 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:51.458 "hdgst": false, 00:18:51.458 "ddgst": false 00:18:51.458 } 00:18:51.458 }, 00:18:51.458 { 00:18:51.458 "method": "bdev_nvme_set_hotplug", 00:18:51.458 "params": { 00:18:51.458 "period_us": 100000, 00:18:51.458 "enable": false 00:18:51.458 } 00:18:51.458 }, 00:18:51.458 { 00:18:51.458 "method": "bdev_enable_histogram", 00:18:51.458 "params": { 00:18:51.458 "name": "nvme0n1", 
00:18:51.458 "enable": true 00:18:51.458 } 00:18:51.458 }, 00:18:51.458 { 00:18:51.458 "method": "bdev_wait_for_examine" 00:18:51.458 } 00:18:51.458 ] 00:18:51.458 }, 00:18:51.458 { 00:18:51.458 "subsystem": "nbd", 00:18:51.458 "config": [] 00:18:51.458 } 00:18:51.458 ] 00:18:51.458 }' 00:18:51.458 [2024-04-26 15:29:08.799876] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:18:51.458 [2024-04-26 15:29:08.799928] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1652908 ] 00:18:51.458 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.458 [2024-04-26 15:29:08.873736] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.719 [2024-04-26 15:29:08.926978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.719 [2024-04-26 15:29:09.052605] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:52.290 15:29:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:52.290 15:29:09 -- common/autotest_common.sh@850 -- # return 0 00:18:52.290 15:29:09 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:52.290 15:29:09 -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:52.290 15:29:09 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.290 15:29:09 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:52.550 Running I/O for 1 seconds... 
00:18:53.490 00:18:53.490 Latency(us) 00:18:53.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.490 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:53.490 Verification LBA range: start 0x0 length 0x2000 00:18:53.490 nvme0n1 : 1.03 4737.03 18.50 0.00 0.00 26615.38 8301.23 51118.08 00:18:53.490 =================================================================================================================== 00:18:53.490 Total : 4737.03 18.50 0.00 0.00 26615.38 8301.23 51118.08 00:18:53.490 0 00:18:53.490 15:29:10 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:53.490 15:29:10 -- target/tls.sh@279 -- # cleanup 00:18:53.490 15:29:10 -- target/tls.sh@15 -- # process_shm --id 0 00:18:53.490 15:29:10 -- common/autotest_common.sh@794 -- # type=--id 00:18:53.490 15:29:10 -- common/autotest_common.sh@795 -- # id=0 00:18:53.490 15:29:10 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:18:53.490 15:29:10 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:53.490 15:29:10 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:18:53.490 15:29:10 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:18:53.490 15:29:10 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:18:53.490 15:29:10 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:53.490 nvmf_trace.0 00:18:53.490 15:29:10 -- common/autotest_common.sh@809 -- # return 0 00:18:53.490 15:29:10 -- target/tls.sh@16 -- # killprocess 1652908 00:18:53.491 15:29:10 -- common/autotest_common.sh@936 -- # '[' -z 1652908 ']' 00:18:53.491 15:29:10 -- common/autotest_common.sh@940 -- # kill -0 1652908 00:18:53.491 15:29:10 -- common/autotest_common.sh@941 -- # uname 00:18:53.491 15:29:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:53.491 15:29:10 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1652908 00:18:53.751 15:29:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:53.751 15:29:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:53.751 15:29:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1652908' 00:18:53.751 killing process with pid 1652908 00:18:53.751 15:29:10 -- common/autotest_common.sh@955 -- # kill 1652908 00:18:53.751 Received shutdown signal, test time was about 1.000000 seconds 00:18:53.751 00:18:53.751 Latency(us) 00:18:53.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.751 =================================================================================================================== 00:18:53.751 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:53.751 15:29:10 -- common/autotest_common.sh@960 -- # wait 1652908 00:18:53.751 15:29:11 -- target/tls.sh@17 -- # nvmftestfini 00:18:53.751 15:29:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:53.751 15:29:11 -- nvmf/common.sh@117 -- # sync 00:18:53.751 15:29:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:53.751 15:29:11 -- nvmf/common.sh@120 -- # set +e 00:18:53.751 15:29:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:53.751 15:29:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:53.751 rmmod nvme_tcp 00:18:53.751 rmmod nvme_fabrics 00:18:53.751 rmmod nvme_keyring 00:18:53.751 15:29:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:53.751 15:29:11 -- nvmf/common.sh@124 -- # set -e 00:18:53.751 15:29:11 -- nvmf/common.sh@125 -- # return 0 00:18:53.751 15:29:11 -- nvmf/common.sh@478 -- # '[' -n 1652682 ']' 00:18:53.751 15:29:11 -- nvmf/common.sh@479 -- # killprocess 1652682 00:18:53.751 15:29:11 -- common/autotest_common.sh@936 -- # '[' -z 1652682 ']' 00:18:53.751 15:29:11 -- common/autotest_common.sh@940 -- # kill -0 1652682 00:18:53.751 15:29:11 -- common/autotest_common.sh@941 -- # uname 
00:18:53.751 15:29:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:53.751 15:29:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1652682 00:18:54.011 15:29:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:54.011 15:29:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:54.011 15:29:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1652682' 00:18:54.011 killing process with pid 1652682 00:18:54.011 15:29:11 -- common/autotest_common.sh@955 -- # kill 1652682 00:18:54.011 15:29:11 -- common/autotest_common.sh@960 -- # wait 1652682 00:18:54.011 15:29:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:54.011 15:29:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:54.011 15:29:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:54.011 15:29:11 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:54.011 15:29:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:54.011 15:29:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.011 15:29:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:54.011 15:29:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.553 15:29:13 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:56.553 15:29:13 -- target/tls.sh@18 -- # rm -f /tmp/tmp.rHm1ep33Qn /tmp/tmp.CSyOIW5qRK /tmp/tmp.H90Cb48IfE 00:18:56.553 00:18:56.553 real 1m23.020s 00:18:56.553 user 2m9.316s 00:18:56.553 sys 0m24.983s 00:18:56.553 15:29:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:56.553 15:29:13 -- common/autotest_common.sh@10 -- # set +x 00:18:56.553 ************************************ 00:18:56.553 END TEST nvmf_tls 00:18:56.553 ************************************ 00:18:56.553 15:29:13 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:56.553 15:29:13 -- 
common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:56.553 15:29:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:56.553 15:29:13 -- common/autotest_common.sh@10 -- # set +x 00:18:56.553 ************************************ 00:18:56.553 START TEST nvmf_fips 00:18:56.553 ************************************ 00:18:56.553 15:29:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:56.553 * Looking for test storage... 00:18:56.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:56.553 15:29:13 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:56.553 15:29:13 -- nvmf/common.sh@7 -- # uname -s 00:18:56.553 15:29:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.553 15:29:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.553 15:29:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.553 15:29:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.553 15:29:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.553 15:29:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.553 15:29:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.553 15:29:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.553 15:29:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.553 15:29:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.553 15:29:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:56.553 15:29:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:56.553 15:29:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.554 15:29:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.554 15:29:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:56.554 
15:29:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.554 15:29:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:56.554 15:29:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.554 15:29:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.554 15:29:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.554 15:29:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.554 15:29:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.554 15:29:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.554 15:29:13 -- paths/export.sh@5 -- # export PATH 00:18:56.554 15:29:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.554 15:29:13 -- nvmf/common.sh@47 -- # : 0 00:18:56.554 15:29:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:56.554 15:29:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:56.554 15:29:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.554 15:29:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.554 15:29:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.554 15:29:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:56.554 15:29:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:56.554 15:29:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:56.554 15:29:13 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.554 15:29:13 -- fips/fips.sh@89 -- # check_openssl_version 
00:18:56.554 15:29:13 -- fips/fips.sh@83 -- # local target=3.0.0 00:18:56.554 15:29:13 -- fips/fips.sh@85 -- # openssl version 00:18:56.554 15:29:13 -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:56.554 15:29:13 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:56.554 15:29:13 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:56.554 15:29:13 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:56.554 15:29:13 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:56.554 15:29:13 -- scripts/common.sh@333 -- # IFS=.-: 00:18:56.554 15:29:13 -- scripts/common.sh@333 -- # read -ra ver1 00:18:56.554 15:29:13 -- scripts/common.sh@334 -- # IFS=.-: 00:18:56.554 15:29:13 -- scripts/common.sh@334 -- # read -ra ver2 00:18:56.554 15:29:13 -- scripts/common.sh@335 -- # local 'op=>=' 00:18:56.554 15:29:13 -- scripts/common.sh@337 -- # ver1_l=3 00:18:56.554 15:29:13 -- scripts/common.sh@338 -- # ver2_l=3 00:18:56.554 15:29:13 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:18:56.554 15:29:13 -- scripts/common.sh@341 -- # case "$op" in 00:18:56.554 15:29:13 -- scripts/common.sh@345 -- # : 1 00:18:56.554 15:29:13 -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:56.554 15:29:13 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:56.554 15:29:13 -- scripts/common.sh@362 -- # decimal 3 00:18:56.554 15:29:13 -- scripts/common.sh@350 -- # local d=3 00:18:56.554 15:29:13 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:56.554 15:29:13 -- scripts/common.sh@352 -- # echo 3 00:18:56.554 15:29:13 -- scripts/common.sh@362 -- # ver1[v]=3 00:18:56.554 15:29:13 -- scripts/common.sh@363 -- # decimal 3 00:18:56.554 15:29:13 -- scripts/common.sh@350 -- # local d=3 00:18:56.554 15:29:13 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:56.554 15:29:13 -- scripts/common.sh@352 -- # echo 3 00:18:56.554 15:29:13 -- scripts/common.sh@363 -- # ver2[v]=3 00:18:56.554 15:29:13 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:56.554 15:29:13 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:56.554 15:29:13 -- scripts/common.sh@361 -- # (( v++ )) 00:18:56.554 15:29:13 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:56.554 15:29:13 -- scripts/common.sh@362 -- # decimal 0 00:18:56.554 15:29:13 -- scripts/common.sh@350 -- # local d=0 00:18:56.554 15:29:13 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:56.554 15:29:13 -- scripts/common.sh@352 -- # echo 0 00:18:56.554 15:29:13 -- scripts/common.sh@362 -- # ver1[v]=0 00:18:56.554 15:29:13 -- scripts/common.sh@363 -- # decimal 0 00:18:56.554 15:29:13 -- scripts/common.sh@350 -- # local d=0 00:18:56.554 15:29:13 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:56.554 15:29:13 -- scripts/common.sh@352 -- # echo 0 00:18:56.554 15:29:13 -- scripts/common.sh@363 -- # ver2[v]=0 00:18:56.554 15:29:13 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:56.554 15:29:13 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:56.554 15:29:13 -- scripts/common.sh@361 -- # (( v++ )) 00:18:56.554 15:29:13 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:56.554 15:29:13 -- scripts/common.sh@362 -- # decimal 9 00:18:56.554 15:29:13 -- scripts/common.sh@350 -- # local d=9 00:18:56.554 15:29:13 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:56.554 15:29:13 -- scripts/common.sh@352 -- # echo 9 00:18:56.554 15:29:13 -- scripts/common.sh@362 -- # ver1[v]=9 00:18:56.554 15:29:13 -- scripts/common.sh@363 -- # decimal 0 00:18:56.554 15:29:13 -- scripts/common.sh@350 -- # local d=0 00:18:56.554 15:29:13 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:56.554 15:29:13 -- scripts/common.sh@352 -- # echo 0 00:18:56.554 15:29:13 -- scripts/common.sh@363 -- # ver2[v]=0 00:18:56.554 15:29:13 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:56.554 15:29:13 -- scripts/common.sh@364 -- # return 0 00:18:56.554 15:29:13 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:56.554 15:29:13 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:56.554 15:29:13 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:56.554 15:29:13 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:56.554 15:29:13 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:56.554 15:29:13 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:56.554 15:29:13 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:56.554 15:29:13 -- fips/fips.sh@113 -- # build_openssl_config 00:18:56.554 15:29:13 -- fips/fips.sh@37 -- # cat 00:18:56.554 15:29:13 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:56.554 15:29:13 -- fips/fips.sh@58 -- # cat - 00:18:56.554 15:29:13 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:56.554 15:29:13 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:56.554 15:29:13 -- fips/fips.sh@116 -- # mapfile -t providers 00:18:56.554 15:29:13 -- fips/fips.sh@116 -- # openssl list -providers 00:18:56.554 15:29:13 -- fips/fips.sh@116 -- # grep name 00:18:56.554 15:29:13 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:56.554 15:29:13 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:56.554 15:29:13 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:56.554 15:29:13 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:56.554 15:29:13 -- common/autotest_common.sh@638 -- # local es=0 00:18:56.554 15:29:13 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:56.554 15:29:13 -- fips/fips.sh@127 -- # : 00:18:56.554 15:29:13 -- common/autotest_common.sh@626 -- # local arg=openssl 00:18:56.554 15:29:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:56.554 15:29:13 -- common/autotest_common.sh@630 -- # type -t openssl 00:18:56.554 15:29:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:56.554 15:29:13 -- common/autotest_common.sh@632 -- # type -P openssl 00:18:56.554 15:29:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:56.554 15:29:13 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:18:56.554 15:29:13 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:18:56.554 15:29:13 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:18:56.554 Error setting digest 00:18:56.554 00024077D97F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:56.554 
00024077D97F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:56.554 15:29:13 -- common/autotest_common.sh@641 -- # es=1 00:18:56.554 15:29:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:56.554 15:29:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:56.554 15:29:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:56.554 15:29:13 -- fips/fips.sh@130 -- # nvmftestinit 00:18:56.554 15:29:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:56.554 15:29:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:56.554 15:29:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:56.554 15:29:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:56.554 15:29:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:56.554 15:29:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.554 15:29:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:56.554 15:29:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.554 15:29:13 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:56.554 15:29:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:56.554 15:29:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:56.554 15:29:13 -- common/autotest_common.sh@10 -- # set +x 00:19:04.702 15:29:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:04.702 15:29:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:04.702 15:29:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:04.702 15:29:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:04.703 15:29:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:04.703 15:29:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:04.703 15:29:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:04.703 15:29:20 -- nvmf/common.sh@295 -- # net_devs=() 00:19:04.703 15:29:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:04.703 15:29:20 -- 
nvmf/common.sh@296 -- # e810=() 00:19:04.703 15:29:20 -- nvmf/common.sh@296 -- # local -ga e810 00:19:04.703 15:29:20 -- nvmf/common.sh@297 -- # x722=() 00:19:04.703 15:29:20 -- nvmf/common.sh@297 -- # local -ga x722 00:19:04.703 15:29:20 -- nvmf/common.sh@298 -- # mlx=() 00:19:04.703 15:29:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:04.703 15:29:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:04.703 15:29:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:04.703 15:29:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:04.703 15:29:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:04.703 15:29:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:04.703 15:29:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:04.703 15:29:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:04.703 15:29:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:04.703 15:29:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:04.703 15:29:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:04.703 15:29:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:04.703 15:29:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:04.703 15:29:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:04.703 15:29:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:04.703 15:29:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:04.703 15:29:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:04.703 15:29:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:04.703 15:29:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:04.703 15:29:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:04.703 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:04.703 
15:29:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:04.703 15:29:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:04.703 15:29:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.703 15:29:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.703 15:29:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:04.703 15:29:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:04.703 15:29:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:04.703 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:04.703 15:29:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:04.703 15:29:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:04.703 15:29:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.703 15:29:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.703 15:29:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:04.703 15:29:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:04.703 15:29:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:04.703 15:29:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:04.703 15:29:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:04.703 15:29:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.703 15:29:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:04.703 15:29:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.703 15:29:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:04.703 Found net devices under 0000:31:00.0: cvl_0_0 00:19:04.703 15:29:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.703 15:29:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:04.703 15:29:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.703 15:29:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:04.703 15:29:20 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.703 15:29:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:04.703 Found net devices under 0000:31:00.1: cvl_0_1 00:19:04.703 15:29:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.703 15:29:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:04.703 15:29:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:04.703 15:29:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:04.703 15:29:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:04.703 15:29:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:04.703 15:29:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:04.703 15:29:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:04.703 15:29:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:04.703 15:29:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:04.703 15:29:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:04.703 15:29:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:04.703 15:29:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:04.703 15:29:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:04.703 15:29:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:04.703 15:29:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:04.703 15:29:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:04.703 15:29:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:04.703 15:29:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:04.703 15:29:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:04.703 15:29:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:04.703 15:29:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:04.703 15:29:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:19:04.703 15:29:21 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:04.703 15:29:21 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:04.703 15:29:21 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:04.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:04.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.509 ms 00:19:04.703 00:19:04.703 --- 10.0.0.2 ping statistics --- 00:19:04.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.703 rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms 00:19:04.703 15:29:21 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:04.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:04.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:19:04.703 00:19:04.703 --- 10.0.0.1 ping statistics --- 00:19:04.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.703 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:19:04.704 15:29:21 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:04.704 15:29:21 -- nvmf/common.sh@411 -- # return 0 00:19:04.704 15:29:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:04.704 15:29:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:04.704 15:29:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:04.704 15:29:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:04.704 15:29:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:04.704 15:29:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:04.704 15:29:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:04.704 15:29:21 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:04.704 15:29:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:04.704 15:29:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:04.704 15:29:21 -- common/autotest_common.sh@10 -- # set +x 
00:19:04.704 15:29:21 -- nvmf/common.sh@470 -- # nvmfpid=1657679 00:19:04.704 15:29:21 -- nvmf/common.sh@471 -- # waitforlisten 1657679 00:19:04.704 15:29:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:04.704 15:29:21 -- common/autotest_common.sh@817 -- # '[' -z 1657679 ']' 00:19:04.704 15:29:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.704 15:29:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:04.704 15:29:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.704 15:29:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:04.704 15:29:21 -- common/autotest_common.sh@10 -- # set +x 00:19:04.704 [2024-04-26 15:29:21.183985] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:19:04.704 [2024-04-26 15:29:21.184056] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.704 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.704 [2024-04-26 15:29:21.271142] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.704 [2024-04-26 15:29:21.361497] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.704 [2024-04-26 15:29:21.361560] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:04.704 [2024-04-26 15:29:21.361569] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.704 [2024-04-26 15:29:21.361577] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.704 [2024-04-26 15:29:21.361585] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:04.704 [2024-04-26 15:29:21.361618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.704 15:29:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:04.704 15:29:21 -- common/autotest_common.sh@850 -- # return 0 00:19:04.704 15:29:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:04.704 15:29:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:04.704 15:29:21 -- common/autotest_common.sh@10 -- # set +x 00:19:04.704 15:29:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.704 15:29:21 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:04.704 15:29:21 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:04.704 15:29:21 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:04.704 15:29:21 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:04.704 15:29:21 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:04.704 15:29:21 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:04.704 15:29:21 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:04.704 15:29:21 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:04.704 [2024-04-26 15:29:22.136782] tcp.c: 
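The `fips.sh` lines above materialize the TLS PSK into a key file before handing its path to the target over RPC: the interchange-format key is written with no trailing newline and the file is locked to owner-only permissions. A runnable sketch of just that provisioning step (the key value is copied from the log; the path here is a temp file, not the repo path the test uses):

```shell
#!/usr/bin/env bash
# Sketch of the PSK provisioning from fips.sh: write the interchange-format
# key without a trailing newline (hence echo -n) and restrict it to 0600.
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp)

echo -n "$key" > "$key_path"
chmod 0600 "$key_path"

stat -c '%a' "$key_path"   # 600 (GNU coreutils stat)
```

The 0600 mode matters beyond hygiene: tooling that consumes PSK files commonly refuses keys that are readable by other users.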
669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:04.965 [2024-04-26 15:29:22.152793] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:04.965 [2024-04-26 15:29:22.153072] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.965 [2024-04-26 15:29:22.183048] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:04.965 malloc0 00:19:04.965 15:29:22 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:04.965 15:29:22 -- fips/fips.sh@147 -- # bdevperf_pid=1657925 00:19:04.965 15:29:22 -- fips/fips.sh@148 -- # waitforlisten 1657925 /var/tmp/bdevperf.sock 00:19:04.965 15:29:22 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:04.965 15:29:22 -- common/autotest_common.sh@817 -- # '[' -z 1657925 ']' 00:19:04.965 15:29:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.965 15:29:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:04.965 15:29:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:04.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:04.965 15:29:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:04.965 15:29:22 -- common/autotest_common.sh@10 -- # set +x 00:19:04.965 [2024-04-26 15:29:22.284880] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:19:04.965 [2024-04-26 15:29:22.284950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1657925 ] 00:19:04.965 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.965 [2024-04-26 15:29:22.340658] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.965 [2024-04-26 15:29:22.403226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.906 15:29:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:05.906 15:29:23 -- common/autotest_common.sh@850 -- # return 0 00:19:05.906 15:29:23 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:05.906 [2024-04-26 15:29:23.155029] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:05.906 [2024-04-26 15:29:23.155088] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:05.906 TLSTESTn1 00:19:05.906 15:29:23 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:05.906 Running I/O for 10 seconds... 
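Both daemon launches above (`nvmf_tgt` and `bdevperf`) follow the same `waitforlisten` pattern: start the process, then poll with `max_retries=100` until its RPC UNIX domain socket appears before issuing any RPCs. A hedged sketch of that polling loop (the helper name, interval, and exit codes here are illustrative simplifications, not SPDK's exact implementation):

```shell
#!/usr/bin/env bash
# Illustrative polling loop in the spirit of waitforlisten: wait until the
# RPC socket path exists (and is a socket), giving up after max_retries.
# Interval and return codes are assumptions, not copied from SPDK.
wait_for_rpc_sock() {
    local rpc_addr=$1
    local max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        if [ -S "$rpc_addr" ]; then
            return 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $rpc_addr" >&2
    return 1
}
```

Polling for the socket rather than sleeping a fixed time is what keeps the subsequent `rpc.py -s /var/tmp/bdevperf.sock ...` calls from racing the daemon's startup.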
00:19:18.149 00:19:18.149 Latency(us) 00:19:18.149 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.149 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:18.149 Verification LBA range: start 0x0 length 0x2000 00:19:18.149 TLSTESTn1 : 10.01 5570.87 21.76 0.00 0.00 22946.27 4614.83 39976.96 00:19:18.149 =================================================================================================================== 00:19:18.149 Total : 5570.87 21.76 0.00 0.00 22946.27 4614.83 39976.96 00:19:18.149 0 00:19:18.149 15:29:33 -- fips/fips.sh@1 -- # cleanup 00:19:18.149 15:29:33 -- fips/fips.sh@15 -- # process_shm --id 0 00:19:18.149 15:29:33 -- common/autotest_common.sh@794 -- # type=--id 00:19:18.149 15:29:33 -- common/autotest_common.sh@795 -- # id=0 00:19:18.149 15:29:33 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:19:18.149 15:29:33 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:18.149 15:29:33 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:19:18.149 15:29:33 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:19:18.149 15:29:33 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:19:18.149 15:29:33 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:18.149 nvmf_trace.0 00:19:18.149 15:29:33 -- common/autotest_common.sh@809 -- # return 0 00:19:18.149 15:29:33 -- fips/fips.sh@16 -- # killprocess 1657925 00:19:18.149 15:29:33 -- common/autotest_common.sh@936 -- # '[' -z 1657925 ']' 00:19:18.149 15:29:33 -- common/autotest_common.sh@940 -- # kill -0 1657925 00:19:18.149 15:29:33 -- common/autotest_common.sh@941 -- # uname 00:19:18.149 15:29:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:18.149 15:29:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1657925 00:19:18.149 
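The bdevperf summary above reports 5570.87 IOPS at a 4096-byte IO size, and the MiB/s column is simply IOPS × io_size / 2^20, which reproduces the reported 21.76 MiB/s. A quick arithmetic check using the table's own numbers:

```shell
#!/usr/bin/env bash
# Sanity-check the bdevperf summary line: MiB/s = IOPS * io_size / 2^20.
# Both input numbers are taken directly from the table above.
iops=5570.87
io_size=4096   # bytes, from "-o 4096" on the bdevperf command line

awk -v iops="$iops" -v sz="$io_size" 'BEGIN {
    printf "%.2f MiB/s\n", iops * sz / (1024 * 1024)
}'
# 21.76 MiB/s
```

The same relation is handy in reverse: dividing a device's advertised bandwidth by the IO size gives the IOPS ceiling a queue-depth-128 verify workload like this one could reach.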
15:29:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:18.149 15:29:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:18.149 15:29:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1657925' 00:19:18.149 killing process with pid 1657925 00:19:18.149 15:29:33 -- common/autotest_common.sh@955 -- # kill 1657925 00:19:18.149 Received shutdown signal, test time was about 10.000000 seconds 00:19:18.149 00:19:18.149 Latency(us) 00:19:18.149 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.149 =================================================================================================================== 00:19:18.149 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:18.149 [2024-04-26 15:29:33.510321] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:18.149 15:29:33 -- common/autotest_common.sh@960 -- # wait 1657925 00:19:18.149 15:29:33 -- fips/fips.sh@17 -- # nvmftestfini 00:19:18.149 15:29:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:18.149 15:29:33 -- nvmf/common.sh@117 -- # sync 00:19:18.149 15:29:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:18.149 15:29:33 -- nvmf/common.sh@120 -- # set +e 00:19:18.149 15:29:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:18.149 15:29:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:18.149 rmmod nvme_tcp 00:19:18.149 rmmod nvme_fabrics 00:19:18.149 rmmod nvme_keyring 00:19:18.149 15:29:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:18.149 15:29:33 -- nvmf/common.sh@124 -- # set -e 00:19:18.149 15:29:33 -- nvmf/common.sh@125 -- # return 0 00:19:18.149 15:29:33 -- nvmf/common.sh@478 -- # '[' -n 1657679 ']' 00:19:18.149 15:29:33 -- nvmf/common.sh@479 -- # killprocess 1657679 00:19:18.149 15:29:33 -- common/autotest_common.sh@936 -- # '[' -z 1657679 ']' 00:19:18.149 15:29:33 -- 
common/autotest_common.sh@940 -- # kill -0 1657679 00:19:18.149 15:29:33 -- common/autotest_common.sh@941 -- # uname 00:19:18.149 15:29:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:18.149 15:29:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1657679 00:19:18.149 15:29:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:18.149 15:29:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:18.149 15:29:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1657679' 00:19:18.149 killing process with pid 1657679 00:19:18.149 15:29:33 -- common/autotest_common.sh@955 -- # kill 1657679 00:19:18.149 [2024-04-26 15:29:33.749889] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:18.149 15:29:33 -- common/autotest_common.sh@960 -- # wait 1657679 00:19:18.149 15:29:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:18.149 15:29:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:18.149 15:29:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:18.149 15:29:33 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:18.149 15:29:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:18.149 15:29:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.149 15:29:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:18.150 15:29:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.721 15:29:35 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:18.721 15:29:35 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:18.721 00:19:18.721 real 0m22.306s 00:19:18.721 user 0m23.806s 00:19:18.721 sys 0m9.024s 00:19:18.722 15:29:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:18.722 15:29:35 -- common/autotest_common.sh@10 -- # set +x 00:19:18.722 
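The `killprocess` calls above guard the actual kill in two steps: `kill -0` confirms the PID still exists, and a `ps --no-headers -o comm=` lookup makes sure the target is not `sudo` before any signal is sent (the log shows the lookups resolving to `reactor_2` and `reactor_1`). A runnable sketch of that guard, simplified to report instead of signal (the helper name mirrors the log, but the body is an illustrative reduction):

```shell
#!/usr/bin/env bash
# Simplified sketch of the killprocess guard: verify the PID is alive with
# "kill -0", look up its command name, and refuse to touch sudo. This
# version only reports what it would do instead of sending a signal.
killprocess_check() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || { echo "no such process: $pid"; return 1; }
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    if [ "$process_name" = "sudo" ]; then
        echo "refusing to kill sudo ($pid)"
        return 1
    fi
    echo "would kill $process_name (pid $pid)"
}

killprocess_check $$   # checks the current shell's own PID
```

`kill -0` is the portable existence probe here: it performs permission and existence checks without delivering any signal.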
************************************ 00:19:18.722 END TEST nvmf_fips 00:19:18.722 ************************************ 00:19:18.722 15:29:35 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:19:18.722 15:29:35 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:19:18.722 15:29:35 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:19:18.722 15:29:35 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:19:18.722 15:29:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:18.722 15:29:35 -- common/autotest_common.sh@10 -- # set +x 00:19:25.313 15:29:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:25.313 15:29:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:25.313 15:29:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:25.313 15:29:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:25.313 15:29:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:25.313 15:29:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:25.313 15:29:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:25.313 15:29:42 -- nvmf/common.sh@295 -- # net_devs=() 00:19:25.313 15:29:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:25.313 15:29:42 -- nvmf/common.sh@296 -- # e810=() 00:19:25.313 15:29:42 -- nvmf/common.sh@296 -- # local -ga e810 00:19:25.313 15:29:42 -- nvmf/common.sh@297 -- # x722=() 00:19:25.313 15:29:42 -- nvmf/common.sh@297 -- # local -ga x722 00:19:25.313 15:29:42 -- nvmf/common.sh@298 -- # mlx=() 00:19:25.313 15:29:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:25.313 15:29:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:25.313 15:29:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:25.313 15:29:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:25.313 15:29:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:25.313 15:29:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:25.313 15:29:42 -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:25.313 15:29:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:25.313 15:29:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:25.314 15:29:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:25.314 15:29:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:25.314 15:29:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:25.314 15:29:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:25.314 15:29:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:25.314 15:29:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:25.314 15:29:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:25.314 15:29:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:25.314 15:29:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:25.314 15:29:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:25.314 15:29:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:25.314 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:25.314 15:29:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:25.314 15:29:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:25.314 15:29:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.314 15:29:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:25.314 15:29:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:25.314 15:29:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:25.314 15:29:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:25.314 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:25.314 15:29:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:25.314 15:29:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:25.314 15:29:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.314 15:29:42 -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:25.314 15:29:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:25.314 15:29:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:25.314 15:29:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:25.314 15:29:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:25.314 15:29:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:25.314 15:29:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.314 15:29:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:25.314 15:29:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.314 15:29:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:25.314 Found net devices under 0000:31:00.0: cvl_0_0 00:19:25.314 15:29:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.314 15:29:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:25.314 15:29:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.314 15:29:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:25.314 15:29:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.314 15:29:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:25.314 Found net devices under 0000:31:00.1: cvl_0_1 00:19:25.314 15:29:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.314 15:29:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:25.314 15:29:42 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:25.314 15:29:42 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:19:25.314 15:29:42 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:25.314 15:29:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:25.314 15:29:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:25.314 15:29:42 -- 
common/autotest_common.sh@10 -- # set +x 00:19:25.575 ************************************ 00:19:25.575 START TEST nvmf_perf_adq 00:19:25.575 ************************************ 00:19:25.575 15:29:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:25.575 * Looking for test storage... 00:19:25.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:25.575 15:29:42 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:25.575 15:29:42 -- nvmf/common.sh@7 -- # uname -s 00:19:25.576 15:29:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.576 15:29:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.576 15:29:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.576 15:29:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.576 15:29:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.576 15:29:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.576 15:29:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.576 15:29:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.576 15:29:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.576 15:29:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.576 15:29:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:25.576 15:29:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:25.576 15:29:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.576 15:29:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.576 15:29:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:25.576 15:29:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:25.576 15:29:42 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:25.576 15:29:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.576 15:29:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.576 15:29:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.576 15:29:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.576 15:29:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.576 15:29:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.576 15:29:42 -- paths/export.sh@5 -- # export PATH 00:19:25.576 15:29:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.576 15:29:42 -- nvmf/common.sh@47 -- # : 0 00:19:25.576 15:29:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:25.576 15:29:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:25.576 15:29:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:25.576 15:29:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.576 15:29:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.576 15:29:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:25.576 15:29:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:25.576 15:29:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:25.576 15:29:42 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:25.576 15:29:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:25.576 15:29:42 -- 
common/autotest_common.sh@10 -- # set +x 00:19:32.168 15:29:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:32.168 15:29:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:32.168 15:29:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:32.168 15:29:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:32.168 15:29:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:32.168 15:29:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:32.168 15:29:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:32.168 15:29:49 -- nvmf/common.sh@295 -- # net_devs=() 00:19:32.168 15:29:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:32.168 15:29:49 -- nvmf/common.sh@296 -- # e810=() 00:19:32.168 15:29:49 -- nvmf/common.sh@296 -- # local -ga e810 00:19:32.168 15:29:49 -- nvmf/common.sh@297 -- # x722=() 00:19:32.168 15:29:49 -- nvmf/common.sh@297 -- # local -ga x722 00:19:32.168 15:29:49 -- nvmf/common.sh@298 -- # mlx=() 00:19:32.168 15:29:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:32.168 15:29:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:32.168 15:29:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:32.168 15:29:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:32.168 15:29:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:32.168 15:29:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:32.168 15:29:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:32.168 15:29:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:32.168 15:29:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:32.168 15:29:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:32.168 15:29:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:32.168 15:29:49 -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:32.168 15:29:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:32.168 15:29:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:32.168 15:29:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:32.168 15:29:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:32.168 15:29:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:32.168 15:29:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:32.169 15:29:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:32.169 15:29:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:32.169 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:32.169 15:29:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:32.169 15:29:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:32.169 15:29:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.169 15:29:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.169 15:29:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:32.169 15:29:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:32.169 15:29:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:32.169 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:32.169 15:29:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:32.169 15:29:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:32.169 15:29:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.169 15:29:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.169 15:29:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:32.169 15:29:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:32.169 15:29:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:32.169 15:29:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:32.169 15:29:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:32.169 15:29:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:32.169 15:29:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:32.169 15:29:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.169 15:29:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:32.169 Found net devices under 0000:31:00.0: cvl_0_0 00:19:32.169 15:29:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.169 15:29:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:32.169 15:29:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.169 15:29:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:32.169 15:29:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.169 15:29:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:32.169 Found net devices under 0000:31:00.1: cvl_0_1 00:19:32.169 15:29:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.169 15:29:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:32.169 15:29:49 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:32.169 15:29:49 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:32.169 15:29:49 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:32.169 15:29:49 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:19:32.169 15:29:49 -- target/perf_adq.sh@52 -- # rmmod ice 00:19:34.083 15:29:51 -- target/perf_adq.sh@53 -- # modprobe ice 00:19:36.012 15:29:53 -- target/perf_adq.sh@54 -- # sleep 5 00:19:41.372 15:29:58 -- target/perf_adq.sh@67 -- # nvmftestinit 00:19:41.372 15:29:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:41.372 15:29:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.372 15:29:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:41.372 15:29:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:41.372 15:29:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:41.372 
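The `adq_reload_driver` step above bounces the `ice` driver so the ADQ test starts from clean module state, then pauses before `nvmftestinit` so the `cvl_*` interfaces can reappear. A dry-run sketch (the `run` wrapper only echoes, since `rmmod`/`modprobe` need root; the 5-second settle delay is the one visible in the log):

```shell
#!/usr/bin/env bash
# Dry-run sketch of adq_reload_driver from the trace: unload and reload the
# ice driver, then give the interfaces time to come back. "run" only echoes,
# so this runs unprivileged; remove it to perform the real reload.
run() { echo "+ $*"; }

run rmmod ice
run modprobe ice
run sleep 5   # settle time before nvmftestinit, as in the log
```

Note the timestamps in the log: roughly two seconds pass between the `rmmod` and `modprobe` lines before the fixed sleep, which is why the whole reload costs several seconds of wall time per test.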
15:29:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.372 15:29:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.372 15:29:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.372 15:29:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:41.372 15:29:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:41.372 15:29:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:41.372 15:29:58 -- common/autotest_common.sh@10 -- # set +x 00:19:41.372 15:29:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:41.372 15:29:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:41.372 15:29:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:41.372 15:29:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:41.372 15:29:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:41.372 15:29:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:41.372 15:29:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:41.372 15:29:58 -- nvmf/common.sh@295 -- # net_devs=() 00:19:41.372 15:29:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:41.372 15:29:58 -- nvmf/common.sh@296 -- # e810=() 00:19:41.372 15:29:58 -- nvmf/common.sh@296 -- # local -ga e810 00:19:41.372 15:29:58 -- nvmf/common.sh@297 -- # x722=() 00:19:41.372 15:29:58 -- nvmf/common.sh@297 -- # local -ga x722 00:19:41.372 15:29:58 -- nvmf/common.sh@298 -- # mlx=() 00:19:41.372 15:29:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:41.372 15:29:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:41.372 15:29:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:41.372 15:29:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:41.372 15:29:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:41.372 15:29:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:41.372 15:29:58 -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:41.372 15:29:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:41.372 15:29:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:41.372 15:29:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:41.372 15:29:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:41.372 15:29:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:41.372 15:29:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:41.372 15:29:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:41.372 15:29:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:41.372 15:29:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:41.372 15:29:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:41.372 15:29:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:41.372 15:29:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:41.372 15:29:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:41.372 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:41.372 15:29:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:41.372 15:29:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:41.372 15:29:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:41.372 15:29:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:41.372 15:29:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:41.372 15:29:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:41.372 15:29:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:41.372 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:41.372 15:29:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:41.372 15:29:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:41.372 15:29:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:41.372 15:29:58 -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:41.372 15:29:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:41.372 15:29:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:41.372 15:29:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:41.372 15:29:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:41.372 15:29:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:41.372 15:29:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.372 15:29:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:41.372 15:29:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.372 15:29:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:41.372 Found net devices under 0000:31:00.0: cvl_0_0 00:19:41.372 15:29:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.372 15:29:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:41.372 15:29:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.372 15:29:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:41.372 15:29:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.372 15:29:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:41.372 Found net devices under 0000:31:00.1: cvl_0_1 00:19:41.372 15:29:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.373 15:29:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:41.373 15:29:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:41.373 15:29:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:41.373 15:29:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:41.373 15:29:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:41.373 15:29:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:41.373 15:29:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:41.373 15:29:58 -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:41.373 15:29:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:41.373 15:29:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:41.373 15:29:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:41.373 15:29:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:41.373 15:29:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:41.373 15:29:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:41.373 15:29:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:41.373 15:29:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:41.373 15:29:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:41.373 15:29:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:41.373 15:29:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:41.373 15:29:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:41.373 15:29:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:41.373 15:29:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:41.373 15:29:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:41.373 15:29:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:41.373 15:29:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:41.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:41.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms 00:19:41.373 00:19:41.373 --- 10.0.0.2 ping statistics --- 00:19:41.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.373 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:19:41.373 15:29:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:41.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
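The nvmf_tcp_init plumbing traced above moves one physical port into a private network namespace, addresses both ends, and opens TCP port 4420 so initiator-to-target traffic actually crosses the link. A minimal sketch of that topology follows; interface and namespace names are the ones from this log, and the PRINT_ONLY guard (an illustrative addition, not part of the SPDK scripts) echoes the commands instead of executing them, since the real ones need root and these specific NICs.

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init topology from the log: one port (cvl_0_0) becomes
# the target inside a netns, its peer (cvl_0_1) stays in the host stack as the
# initiator. PRINT_ONLY=1 (the default here) echoes instead of executing.
run() { if [ "${PRINT_ONLY:-1}" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_tcp_ns() {
    local tgt_if=$1 ini_if=$2 ns=$3
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"             # target port into the namespace
    run ip addr add 10.0.0.1/24 dev "$ini_if"         # initiator side (host stack)
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"   # target side
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP in
}

setup_tcp_ns cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

The two pings in the log (to 10.0.0.2 from the host, to 10.0.0.1 from inside the namespace) then verify both directions before the target is started.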
00:19:41.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:19:41.373 00:19:41.373 --- 10.0.0.1 ping statistics --- 00:19:41.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.373 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:19:41.373 15:29:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:41.373 15:29:58 -- nvmf/common.sh@411 -- # return 0 00:19:41.373 15:29:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:41.373 15:29:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:41.373 15:29:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:41.373 15:29:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:41.373 15:29:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:41.373 15:29:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:41.373 15:29:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:41.373 15:29:58 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:41.373 15:29:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:41.373 15:29:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:41.373 15:29:58 -- common/autotest_common.sh@10 -- # set +x 00:19:41.373 15:29:58 -- nvmf/common.sh@470 -- # nvmfpid=1669920 00:19:41.373 15:29:58 -- nvmf/common.sh@471 -- # waitforlisten 1669920 00:19:41.373 15:29:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:41.373 15:29:58 -- common/autotest_common.sh@817 -- # '[' -z 1669920 ']' 00:19:41.373 15:29:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.373 15:29:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:41.373 15:29:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:41.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.373 15:29:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:41.373 15:29:58 -- common/autotest_common.sh@10 -- # set +x 00:19:41.373 [2024-04-26 15:29:58.663445] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:19:41.373 [2024-04-26 15:29:58.663506] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.373 EAL: No free 2048 kB hugepages reported on node 1 00:19:41.373 [2024-04-26 15:29:58.737144] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:41.373 [2024-04-26 15:29:58.813263] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.373 [2024-04-26 15:29:58.813305] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:41.373 [2024-04-26 15:29:58.813314] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:41.373 [2024-04-26 15:29:58.813322] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:41.373 [2024-04-26 15:29:58.813329] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:41.373 [2024-04-26 15:29:58.813538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.373 [2024-04-26 15:29:58.813615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.373 [2024-04-26 15:29:58.813734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.373 [2024-04-26 15:29:58.813735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:42.317 15:29:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:42.317 15:29:59 -- common/autotest_common.sh@850 -- # return 0 00:19:42.317 15:29:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:42.317 15:29:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:42.317 15:29:59 -- common/autotest_common.sh@10 -- # set +x 00:19:42.317 15:29:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.317 15:29:59 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:19:42.318 15:29:59 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:42.318 15:29:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.318 15:29:59 -- common/autotest_common.sh@10 -- # set +x 00:19:42.318 15:29:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.318 15:29:59 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:19:42.318 15:29:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.318 15:29:59 -- common/autotest_common.sh@10 -- # set +x 00:19:42.318 15:29:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.318 15:29:59 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:42.318 15:29:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.318 15:29:59 -- common/autotest_common.sh@10 -- # set +x 00:19:42.318 [2024-04-26 15:29:59.580871] tcp.c: 669:nvmf_tcp_create: *NOTICE*: 
*** TCP Transport Init *** 00:19:42.318 15:29:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.318 15:29:59 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:42.318 15:29:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.318 15:29:59 -- common/autotest_common.sh@10 -- # set +x 00:19:42.318 Malloc1 00:19:42.318 15:29:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.318 15:29:59 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:42.318 15:29:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.318 15:29:59 -- common/autotest_common.sh@10 -- # set +x 00:19:42.318 15:29:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.318 15:29:59 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:42.318 15:29:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.318 15:29:59 -- common/autotest_common.sh@10 -- # set +x 00:19:42.318 15:29:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.318 15:29:59 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:42.318 15:29:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.318 15:29:59 -- common/autotest_common.sh@10 -- # set +x 00:19:42.318 [2024-04-26 15:29:59.636178] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.318 15:29:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.318 15:29:59 -- target/perf_adq.sh@73 -- # perfpid=1670089 00:19:42.318 15:29:59 -- target/perf_adq.sh@74 -- # sleep 2 00:19:42.318 15:29:59 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
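Collected in one place, the adq_configure_nvmf_target RPC sequence traced above (placement-id socket options, TCP transport with a socket priority, a 64 MiB malloc namespace, and a listener on 10.0.0.2:4420) would look roughly like this against an nvmf_tgt started with --wait-for-rpc. rpc.py is SPDK's stock RPC client; the PRINT_ONLY guard is an illustrative addition that echoes the commands instead of issuing them.

```shell
#!/usr/bin/env bash
# The adq_configure_nvmf_target RPC sequence from the log, replayed via SPDK's
# rpc.py. PRINT_ONLY=1 (the default here) echoes each call instead of running it.
rpc() { if [ "${PRINT_ONLY:-1}" = 1 ]; then echo "rpc.py $*"; else rpc.py "$@"; fi; }

rpc sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
rpc framework_start_init                        # leave the --wait-for-rpc pause
rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
rpc bdev_malloc_create 64 512 -b Malloc1        # 64 MiB bdev, 512-byte blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The second run later in the log differs only in the ADQ knobs: --enable-placement-id 1 and --sock-priority 1.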
00:19:42.318 EAL: No free 2048 kB hugepages reported on node 1 00:19:44.234 15:30:01 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:19:44.234 15:30:01 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:44.234 15:30:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.234 15:30:01 -- target/perf_adq.sh@76 -- # wc -l 00:19:44.234 15:30:01 -- common/autotest_common.sh@10 -- # set +x 00:19:44.234 15:30:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.494 15:30:01 -- target/perf_adq.sh@76 -- # count=4 00:19:44.494 15:30:01 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:19:44.494 15:30:01 -- target/perf_adq.sh@81 -- # wait 1670089 00:19:52.628 [2024-04-26 15:30:09.786782] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166a0c0 is same with the state(5) to be set 00:19:52.628 Initializing NVMe Controllers 00:19:52.628 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:52.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:52.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:52.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:52.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:52.628 Initialization complete. Launching workers. 
00:19:52.628 ======================================================== 00:19:52.628 Latency(us) 00:19:52.628 Device Information : IOPS MiB/s Average min max 00:19:52.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10757.50 42.02 5950.69 1466.81 9002.56 00:19:52.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14332.90 55.99 4465.31 1311.16 8490.87 00:19:52.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13643.70 53.30 4690.43 1266.76 11243.62 00:19:52.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12594.40 49.20 5080.73 1207.68 11412.34 00:19:52.628 ======================================================== 00:19:52.628 Total : 51328.49 200.50 4987.46 1207.68 11412.34 00:19:52.628 00:19:52.628 15:30:09 -- target/perf_adq.sh@82 -- # nvmftestfini 00:19:52.628 15:30:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:52.628 15:30:09 -- nvmf/common.sh@117 -- # sync 00:19:52.628 15:30:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:52.628 15:30:09 -- nvmf/common.sh@120 -- # set +e 00:19:52.628 15:30:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:52.628 15:30:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:52.628 rmmod nvme_tcp 00:19:52.628 rmmod nvme_fabrics 00:19:52.628 rmmod nvme_keyring 00:19:52.628 15:30:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:52.628 15:30:09 -- nvmf/common.sh@124 -- # set -e 00:19:52.628 15:30:09 -- nvmf/common.sh@125 -- # return 0 00:19:52.628 15:30:09 -- nvmf/common.sh@478 -- # '[' -n 1669920 ']' 00:19:52.628 15:30:09 -- nvmf/common.sh@479 -- # killprocess 1669920 00:19:52.628 15:30:09 -- common/autotest_common.sh@936 -- # '[' -z 1669920 ']' 00:19:52.628 15:30:09 -- common/autotest_common.sh@940 -- # kill -0 1669920 00:19:52.628 15:30:09 -- common/autotest_common.sh@941 -- # uname 00:19:52.628 15:30:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:52.628 15:30:09 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1669920 00:19:52.628 15:30:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:52.628 15:30:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:52.628 15:30:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1669920' 00:19:52.628 killing process with pid 1669920 00:19:52.628 15:30:09 -- common/autotest_common.sh@955 -- # kill 1669920 00:19:52.628 15:30:09 -- common/autotest_common.sh@960 -- # wait 1669920 00:19:52.890 15:30:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:52.890 15:30:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:52.890 15:30:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:52.890 15:30:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:52.890 15:30:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:52.890 15:30:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.890 15:30:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.890 15:30:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.802 15:30:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:54.802 15:30:12 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:19:54.802 15:30:12 -- target/perf_adq.sh@52 -- # rmmod ice 00:19:56.185 15:30:13 -- target/perf_adq.sh@53 -- # modprobe ice 00:19:58.727 15:30:15 -- target/perf_adq.sh@54 -- # sleep 5 00:20:04.018 15:30:20 -- target/perf_adq.sh@87 -- # nvmftestinit 00:20:04.018 15:30:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:04.019 15:30:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.019 15:30:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:04.019 15:30:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:04.019 15:30:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:04.019 15:30:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.019 
15:30:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.019 15:30:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.019 15:30:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:04.019 15:30:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:04.019 15:30:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:04.019 15:30:20 -- common/autotest_common.sh@10 -- # set +x 00:20:04.019 15:30:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:04.019 15:30:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:04.019 15:30:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:04.019 15:30:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:04.019 15:30:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:04.019 15:30:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:04.019 15:30:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:04.019 15:30:20 -- nvmf/common.sh@295 -- # net_devs=() 00:20:04.019 15:30:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:04.019 15:30:20 -- nvmf/common.sh@296 -- # e810=() 00:20:04.019 15:30:20 -- nvmf/common.sh@296 -- # local -ga e810 00:20:04.019 15:30:20 -- nvmf/common.sh@297 -- # x722=() 00:20:04.019 15:30:20 -- nvmf/common.sh@297 -- # local -ga x722 00:20:04.019 15:30:20 -- nvmf/common.sh@298 -- # mlx=() 00:20:04.019 15:30:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:04.019 15:30:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:04.019 15:30:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:04.019 15:30:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:04.019 15:30:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:04.019 15:30:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:04.019 15:30:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:04.019 15:30:20 -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:04.019 15:30:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:04.019 15:30:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:04.019 15:30:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:04.019 15:30:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:04.019 15:30:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:04.019 15:30:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:04.019 15:30:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:04.019 15:30:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:04.019 15:30:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:04.019 15:30:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:04.019 15:30:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:04.019 15:30:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:04.019 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:04.019 15:30:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:04.019 15:30:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:04.019 15:30:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.019 15:30:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.019 15:30:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:04.019 15:30:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:04.019 15:30:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:04.019 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:04.019 15:30:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:04.019 15:30:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:04.019 15:30:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.019 15:30:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.019 15:30:20 -- nvmf/common.sh@352 -- # 
[[ tcp == rdma ]] 00:20:04.019 15:30:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:04.019 15:30:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:04.019 15:30:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:04.019 15:30:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:04.019 15:30:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.019 15:30:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:04.019 15:30:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.019 15:30:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:04.019 Found net devices under 0000:31:00.0: cvl_0_0 00:20:04.019 15:30:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.019 15:30:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:04.019 15:30:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.019 15:30:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:04.019 15:30:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.019 15:30:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:04.019 Found net devices under 0000:31:00.1: cvl_0_1 00:20:04.019 15:30:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.019 15:30:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:04.019 15:30:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:04.019 15:30:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:04.019 15:30:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:04.019 15:30:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:04.019 15:30:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:04.019 15:30:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:04.019 15:30:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:04.019 15:30:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:04.019 15:30:20 -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:04.019 15:30:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:04.019 15:30:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:04.019 15:30:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:04.019 15:30:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:04.019 15:30:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:04.019 15:30:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:04.019 15:30:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:04.019 15:30:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:04.019 15:30:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:04.019 15:30:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:04.019 15:30:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:04.019 15:30:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:04.019 15:30:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:04.019 15:30:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:04.019 15:30:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:04.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:04.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:20:04.019 00:20:04.019 --- 10.0.0.2 ping statistics --- 00:20:04.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.019 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:20:04.019 15:30:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:04.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:04.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:20:04.019 00:20:04.019 --- 10.0.0.1 ping statistics --- 00:20:04.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.019 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:20:04.019 15:30:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.019 15:30:20 -- nvmf/common.sh@411 -- # return 0 00:20:04.019 15:30:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:04.019 15:30:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:04.019 15:30:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:04.019 15:30:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:04.019 15:30:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:04.019 15:30:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:04.019 15:30:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:04.019 15:30:21 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:20:04.019 15:30:21 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:04.019 15:30:21 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:04.019 15:30:21 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:04.019 net.core.busy_poll = 1 00:20:04.019 15:30:21 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:04.019 net.core.busy_read = 1 00:20:04.019 15:30:21 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:04.019 15:30:21 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:04.019 15:30:21 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:04.019 15:30:21 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev 
cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:04.019 15:30:21 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:04.019 15:30:21 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:04.019 15:30:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:04.020 15:30:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:04.020 15:30:21 -- common/autotest_common.sh@10 -- # set +x 00:20:04.020 15:30:21 -- nvmf/common.sh@470 -- # nvmfpid=1675313 00:20:04.020 15:30:21 -- nvmf/common.sh@471 -- # waitforlisten 1675313 00:20:04.020 15:30:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:04.020 15:30:21 -- common/autotest_common.sh@817 -- # '[' -z 1675313 ']' 00:20:04.020 15:30:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.020 15:30:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:04.020 15:30:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.020 15:30:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:04.020 15:30:21 -- common/autotest_common.sh@10 -- # set +x 00:20:04.020 [2024-04-26 15:30:21.378254] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
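The adq_configure_driver steps traced above are the ADQ-specific part: hardware TC offload and busy polling are switched on, mqprio splits the port's queues into two traffic classes, and a flower filter pins NVMe/TCP (TCP dst port 4420 to 10.0.0.2) onto TC 1 in hardware via skip_sw. A condensed sketch, with the device name and the 2+2 queue split taken from this log and the same PRINT_ONLY echo guard as an illustrative stand-in for root execution:

```shell
#!/usr/bin/env bash
# adq_configure_driver from the log, condensed: offloads + busy poll + mqprio
# + a hardware flower filter steering NVMe/TCP onto traffic class 1.
run() { if [ "${PRINT_ONLY:-1}" = 1 ]; then echo "$*"; else "$@"; fi; }

adq_configure() {
    local dev=$1
    run ethtool --offload "$dev" hw-tc-offload on
    run ethtool --set-priv-flags "$dev" channel-pkt-inspect-optimize off
    run sysctl -w net.core.busy_poll=1
    run sysctl -w net.core.busy_read=1
    # two TCs: queues 0-1 serve default traffic, queues 2-3 back TC 1
    run tc qdisc add dev "$dev" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    run tc qdisc add dev "$dev" ingress
    run tc filter add dev "$dev" protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
}

adq_configure cvl_0_0
```

In the log every command runs inside the cvl_0_0_ns_spdk namespace (ip netns exec), since that is where the target port now lives.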
00:20:04.020 [2024-04-26 15:30:21.378340] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.020 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.020 [2024-04-26 15:30:21.451892] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:04.282 [2024-04-26 15:30:21.524727] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.282 [2024-04-26 15:30:21.524773] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.282 [2024-04-26 15:30:21.524781] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.282 [2024-04-26 15:30:21.524788] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.282 [2024-04-26 15:30:21.524794] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:04.282 [2024-04-26 15:30:21.524931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.282 [2024-04-26 15:30:21.525152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.282 [2024-04-26 15:30:21.525308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:04.282 [2024-04-26 15:30:21.525309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.852 15:30:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:04.852 15:30:22 -- common/autotest_common.sh@850 -- # return 0 00:20:04.852 15:30:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:04.852 15:30:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:04.852 15:30:22 -- common/autotest_common.sh@10 -- # set +x 00:20:04.852 15:30:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.852 15:30:22 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:20:04.852 15:30:22 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:04.852 15:30:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.852 15:30:22 -- common/autotest_common.sh@10 -- # set +x 00:20:04.852 15:30:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.852 15:30:22 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:20:04.852 15:30:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.852 15:30:22 -- common/autotest_common.sh@10 -- # set +x 00:20:04.852 15:30:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.852 15:30:22 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:04.852 15:30:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.852 15:30:22 -- common/autotest_common.sh@10 -- # set +x 00:20:04.852 [2024-04-26 15:30:22.269785] tcp.c: 669:nvmf_tcp_create: *NOTICE*: 
*** TCP Transport Init *** 00:20:04.852 15:30:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.852 15:30:22 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:04.852 15:30:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.852 15:30:22 -- common/autotest_common.sh@10 -- # set +x 00:20:04.852 Malloc1 00:20:04.852 15:30:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.852 15:30:22 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:04.852 15:30:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.852 15:30:22 -- common/autotest_common.sh@10 -- # set +x 00:20:05.112 15:30:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.112 15:30:22 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:05.112 15:30:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.112 15:30:22 -- common/autotest_common.sh@10 -- # set +x 00:20:05.112 15:30:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.112 15:30:22 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:05.112 15:30:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.112 15:30:22 -- common/autotest_common.sh@10 -- # set +x 00:20:05.112 [2024-04-26 15:30:22.325082] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:05.112 15:30:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.112 15:30:22 -- target/perf_adq.sh@94 -- # perfpid=1675479 00:20:05.112 15:30:22 -- target/perf_adq.sh@95 -- # sleep 2 00:20:05.113 15:30:22 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
00:20:05.113 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.026 15:30:24 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:20:07.026 15:30:24 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:07.026 15:30:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.026 15:30:24 -- target/perf_adq.sh@97 -- # wc -l 00:20:07.026 15:30:24 -- common/autotest_common.sh@10 -- # set +x 00:20:07.026 15:30:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.026 15:30:24 -- target/perf_adq.sh@97 -- # count=2 00:20:07.026 15:30:24 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:20:07.026 15:30:24 -- target/perf_adq.sh@103 -- # wait 1675479 00:20:15.160 Initializing NVMe Controllers 00:20:15.160 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:15.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:15.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:15.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:15.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:15.160 Initialization complete. Launching workers. 
00:20:15.160 ======================================================== 00:20:15.160 Latency(us) 00:20:15.160 Device Information : IOPS MiB/s Average min max 00:20:15.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 20311.30 79.34 3150.91 1030.81 45962.75 00:20:15.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6080.11 23.75 10527.10 1276.76 58536.18 00:20:15.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7541.39 29.46 8514.57 1136.59 56989.95 00:20:15.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5772.02 22.55 11110.76 1541.42 59034.51 00:20:15.160 ======================================================== 00:20:15.160 Total : 39704.82 155.10 6456.35 1030.81 59034.51 00:20:15.160 00:20:15.160 15:30:32 -- target/perf_adq.sh@104 -- # nvmftestfini 00:20:15.160 15:30:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:15.160 15:30:32 -- nvmf/common.sh@117 -- # sync 00:20:15.160 15:30:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:15.160 15:30:32 -- nvmf/common.sh@120 -- # set +e 00:20:15.160 15:30:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:15.160 15:30:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:15.160 rmmod nvme_tcp 00:20:15.160 rmmod nvme_fabrics 00:20:15.160 rmmod nvme_keyring 00:20:15.160 15:30:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:15.160 15:30:32 -- nvmf/common.sh@124 -- # set -e 00:20:15.160 15:30:32 -- nvmf/common.sh@125 -- # return 0 00:20:15.160 15:30:32 -- nvmf/common.sh@478 -- # '[' -n 1675313 ']' 00:20:15.160 15:30:32 -- nvmf/common.sh@479 -- # killprocess 1675313 00:20:15.160 15:30:32 -- common/autotest_common.sh@936 -- # '[' -z 1675313 ']' 00:20:15.160 15:30:32 -- common/autotest_common.sh@940 -- # kill -0 1675313 00:20:15.160 15:30:32 -- common/autotest_common.sh@941 -- # uname 00:20:15.160 15:30:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:15.160 15:30:32 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1675313 00:20:15.421 15:30:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:15.421 15:30:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:15.421 15:30:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1675313' 00:20:15.421 killing process with pid 1675313 00:20:15.421 15:30:32 -- common/autotest_common.sh@955 -- # kill 1675313 00:20:15.421 15:30:32 -- common/autotest_common.sh@960 -- # wait 1675313 00:20:15.421 15:30:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:15.421 15:30:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:15.421 15:30:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:15.421 15:30:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:15.421 15:30:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:15.421 15:30:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.421 15:30:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.421 15:30:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.967 15:30:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:17.967 15:30:34 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:20:17.967 00:20:17.967 real 0m51.960s 00:20:17.967 user 2m47.498s 00:20:17.967 sys 0m10.880s 00:20:17.967 15:30:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:17.967 15:30:34 -- common/autotest_common.sh@10 -- # set +x 00:20:17.967 ************************************ 00:20:17.967 END TEST nvmf_perf_adq 00:20:17.967 ************************************ 00:20:17.967 15:30:34 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:17.967 15:30:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:17.967 15:30:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:20:17.967 15:30:34 -- common/autotest_common.sh@10 -- # set +x 00:20:17.967 ************************************ 00:20:17.967 START TEST nvmf_shutdown 00:20:17.967 ************************************ 00:20:17.967 15:30:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:17.967 * Looking for test storage... 00:20:17.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:17.967 15:30:35 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:17.967 15:30:35 -- nvmf/common.sh@7 -- # uname -s 00:20:17.967 15:30:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:17.967 15:30:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:17.967 15:30:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:17.967 15:30:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:17.967 15:30:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:17.967 15:30:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:17.967 15:30:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:17.967 15:30:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:17.967 15:30:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:17.967 15:30:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.967 15:30:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:17.967 15:30:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:17.967 15:30:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.967 15:30:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.967 15:30:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:17.967 15:30:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:17.967 15:30:35 -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:17.967 15:30:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.967 15:30:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.967 15:30:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.967 15:30:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.967 15:30:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.967 15:30:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.967 15:30:35 -- paths/export.sh@5 -- # export PATH 00:20:17.967 15:30:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.967 15:30:35 -- nvmf/common.sh@47 -- # : 0 00:20:17.967 15:30:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:17.967 15:30:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:17.967 15:30:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:17.967 15:30:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.967 15:30:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.967 15:30:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:17.967 15:30:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:17.967 15:30:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:17.967 15:30:35 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:17.967 15:30:35 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:17.967 15:30:35 -- target/shutdown.sh@147 
-- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:17.967 15:30:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:17.967 15:30:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:17.967 15:30:35 -- common/autotest_common.sh@10 -- # set +x 00:20:17.967 ************************************ 00:20:17.968 START TEST nvmf_shutdown_tc1 00:20:17.968 ************************************ 00:20:17.968 15:30:35 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:20:17.968 15:30:35 -- target/shutdown.sh@74 -- # starttarget 00:20:17.968 15:30:35 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:17.968 15:30:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:17.968 15:30:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:17.968 15:30:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:17.968 15:30:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:17.968 15:30:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:17.968 15:30:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.968 15:30:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:17.968 15:30:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.968 15:30:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:17.968 15:30:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:17.968 15:30:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:17.968 15:30:35 -- common/autotest_common.sh@10 -- # set +x 00:20:26.106 15:30:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:26.106 15:30:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:26.106 15:30:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:26.106 15:30:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:26.106 15:30:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:26.106 15:30:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:26.106 15:30:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:20:26.106 15:30:42 -- nvmf/common.sh@295 -- # net_devs=() 00:20:26.106 15:30:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:26.106 15:30:42 -- nvmf/common.sh@296 -- # e810=() 00:20:26.106 15:30:42 -- nvmf/common.sh@296 -- # local -ga e810 00:20:26.106 15:30:42 -- nvmf/common.sh@297 -- # x722=() 00:20:26.106 15:30:42 -- nvmf/common.sh@297 -- # local -ga x722 00:20:26.106 15:30:42 -- nvmf/common.sh@298 -- # mlx=() 00:20:26.106 15:30:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:26.106 15:30:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.106 15:30:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.106 15:30:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.106 15:30:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.106 15:30:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.106 15:30:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.106 15:30:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.106 15:30:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.106 15:30:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.106 15:30:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.106 15:30:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.106 15:30:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:26.106 15:30:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:26.106 15:30:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:26.106 15:30:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:26.106 15:30:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:26.106 15:30:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:26.106 15:30:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:20:26.106 15:30:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:26.106 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:26.106 15:30:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.106 15:30:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.106 15:30:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.106 15:30:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.106 15:30:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.106 15:30:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.106 15:30:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:26.106 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:26.106 15:30:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.106 15:30:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.106 15:30:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.106 15:30:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.106 15:30:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.106 15:30:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:26.106 15:30:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:26.106 15:30:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:26.106 15:30:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.106 15:30:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.106 15:30:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:26.106 15:30:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.106 15:30:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:26.106 Found net devices under 0000:31:00.0: cvl_0_0 00:20:26.106 15:30:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.106 15:30:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.106 15:30:42 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.106 15:30:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:26.106 15:30:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.106 15:30:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:26.106 Found net devices under 0000:31:00.1: cvl_0_1 00:20:26.106 15:30:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.106 15:30:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:26.106 15:30:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:26.106 15:30:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:26.106 15:30:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:26.106 15:30:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:26.106 15:30:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.106 15:30:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.106 15:30:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.106 15:30:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:26.106 15:30:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.106 15:30:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.106 15:30:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:26.106 15:30:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.106 15:30:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.106 15:30:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:26.106 15:30:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:26.106 15:30:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.106 15:30:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.106 15:30:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.106 15:30:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:20:26.106 15:30:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:26.106 15:30:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.106 15:30:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.106 15:30:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.106 15:30:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:26.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:20:26.106 00:20:26.106 --- 10.0.0.2 ping statistics --- 00:20:26.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.106 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:20:26.106 15:30:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:26.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:20:26.106 00:20:26.106 --- 10.0.0.1 ping statistics --- 00:20:26.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.106 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:20:26.106 15:30:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.106 15:30:42 -- nvmf/common.sh@411 -- # return 0 00:20:26.106 15:30:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:26.106 15:30:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.106 15:30:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:26.106 15:30:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:26.106 15:30:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.106 15:30:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:26.106 15:30:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:26.106 15:30:42 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:26.106 15:30:42 -- nvmf/common.sh@468 -- # 
timing_enter start_nvmf_tgt 00:20:26.106 15:30:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:26.106 15:30:42 -- common/autotest_common.sh@10 -- # set +x 00:20:26.106 15:30:42 -- nvmf/common.sh@470 -- # nvmfpid=1681999 00:20:26.106 15:30:42 -- nvmf/common.sh@471 -- # waitforlisten 1681999 00:20:26.106 15:30:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:26.106 15:30:42 -- common/autotest_common.sh@817 -- # '[' -z 1681999 ']' 00:20:26.106 15:30:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.106 15:30:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:26.106 15:30:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.106 15:30:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:26.106 15:30:42 -- common/autotest_common.sh@10 -- # set +x 00:20:26.106 [2024-04-26 15:30:42.615733] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:20:26.106 [2024-04-26 15:30:42.615798] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.106 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.106 [2024-04-26 15:30:42.703873] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.106 [2024-04-26 15:30:42.795467] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.106 [2024-04-26 15:30:42.795532] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:26.106 [2024-04-26 15:30:42.795540] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.106 [2024-04-26 15:30:42.795547] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.106 [2024-04-26 15:30:42.795554] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:26.107 [2024-04-26 15:30:42.795706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.107 [2024-04-26 15:30:42.795871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:26.107 [2024-04-26 15:30:42.795976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:26.107 [2024-04-26 15:30:42.796191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.107 15:30:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:26.107 15:30:43 -- common/autotest_common.sh@850 -- # return 0 00:20:26.107 15:30:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:26.107 15:30:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:26.107 15:30:43 -- common/autotest_common.sh@10 -- # set +x 00:20:26.107 15:30:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.107 15:30:43 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:26.107 15:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.107 15:30:43 -- common/autotest_common.sh@10 -- # set +x 00:20:26.107 [2024-04-26 15:30:43.439340] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.107 15:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.107 15:30:43 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:26.107 15:30:43 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:26.107 15:30:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:26.107 15:30:43 -- 
common/autotest_common.sh@10 -- # set +x 00:20:26.107 15:30:43 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:26.107 15:30:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.107 15:30:43 -- target/shutdown.sh@28 -- # cat 00:20:26.107 15:30:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.107 15:30:43 -- target/shutdown.sh@28 -- # cat 00:20:26.107 15:30:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.107 15:30:43 -- target/shutdown.sh@28 -- # cat 00:20:26.107 15:30:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.107 15:30:43 -- target/shutdown.sh@28 -- # cat 00:20:26.107 15:30:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.107 15:30:43 -- target/shutdown.sh@28 -- # cat 00:20:26.107 15:30:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.107 15:30:43 -- target/shutdown.sh@28 -- # cat 00:20:26.107 15:30:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.107 15:30:43 -- target/shutdown.sh@28 -- # cat 00:20:26.107 15:30:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.107 15:30:43 -- target/shutdown.sh@28 -- # cat 00:20:26.107 15:30:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.107 15:30:43 -- target/shutdown.sh@28 -- # cat 00:20:26.107 15:30:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.107 15:30:43 -- target/shutdown.sh@28 -- # cat 00:20:26.107 15:30:43 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:26.107 15:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.107 15:30:43 -- common/autotest_common.sh@10 -- # set +x 00:20:26.107 Malloc1 00:20:26.107 [2024-04-26 15:30:43.542866] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.367 Malloc2 00:20:26.367 Malloc3 00:20:26.367 Malloc4 
00:20:26.367 Malloc5 00:20:26.367 Malloc6 00:20:26.367 Malloc7 00:20:26.367 Malloc8 00:20:26.653 Malloc9 00:20:26.653 Malloc10 00:20:26.653 15:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.653 15:30:43 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:26.653 15:30:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:26.653 15:30:43 -- common/autotest_common.sh@10 -- # set +x 00:20:26.653 15:30:43 -- target/shutdown.sh@78 -- # perfpid=1682262 00:20:26.653 15:30:43 -- target/shutdown.sh@79 -- # waitforlisten 1682262 /var/tmp/bdevperf.sock 00:20:26.653 15:30:43 -- common/autotest_common.sh@817 -- # '[' -z 1682262 ']' 00:20:26.653 15:30:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:26.653 15:30:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:26.653 15:30:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:26.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:26.653 15:30:43 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:26.653 15:30:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:26.653 15:30:43 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:26.653 15:30:43 -- common/autotest_common.sh@10 -- # set +x 00:20:26.653 15:30:43 -- nvmf/common.sh@521 -- # config=() 00:20:26.653 15:30:43 -- nvmf/common.sh@521 -- # local subsystem config 00:20:26.653 15:30:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.653 15:30:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.653 { 00:20:26.653 "params": { 00:20:26.653 "name": "Nvme$subsystem", 00:20:26.653 "trtype": "$TEST_TRANSPORT", 00:20:26.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.653 "adrfam": "ipv4", 00:20:26.653 "trsvcid": "$NVMF_PORT", 00:20:26.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.653 "hdgst": ${hdgst:-false}, 00:20:26.653 "ddgst": ${ddgst:-false} 00:20:26.653 }, 00:20:26.653 "method": "bdev_nvme_attach_controller" 00:20:26.653 } 00:20:26.653 EOF 00:20:26.653 )") 00:20:26.653 15:30:43 -- nvmf/common.sh@543 -- # cat 00:20:26.653 15:30:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.653 15:30:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.653 { 00:20:26.653 "params": { 00:20:26.653 "name": "Nvme$subsystem", 00:20:26.653 "trtype": "$TEST_TRANSPORT", 00:20:26.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.653 "adrfam": "ipv4", 00:20:26.653 "trsvcid": "$NVMF_PORT", 00:20:26.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.653 "hdgst": ${hdgst:-false}, 00:20:26.653 "ddgst": ${ddgst:-false} 00:20:26.653 }, 00:20:26.653 "method": "bdev_nvme_attach_controller" 00:20:26.653 } 00:20:26.653 EOF 
00:20:26.653 )") 00:20:26.653 15:30:43 -- nvmf/common.sh@543 -- # cat 00:20:26.653 15:30:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.653 15:30:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.653 { 00:20:26.653 "params": { 00:20:26.653 "name": "Nvme$subsystem", 00:20:26.653 "trtype": "$TEST_TRANSPORT", 00:20:26.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.653 "adrfam": "ipv4", 00:20:26.653 "trsvcid": "$NVMF_PORT", 00:20:26.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.653 "hdgst": ${hdgst:-false}, 00:20:26.653 "ddgst": ${ddgst:-false} 00:20:26.653 }, 00:20:26.653 "method": "bdev_nvme_attach_controller" 00:20:26.653 } 00:20:26.653 EOF 00:20:26.653 )") 00:20:26.653 15:30:43 -- nvmf/common.sh@543 -- # cat 00:20:26.653 15:30:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.653 15:30:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.653 { 00:20:26.653 "params": { 00:20:26.653 "name": "Nvme$subsystem", 00:20:26.654 "trtype": "$TEST_TRANSPORT", 00:20:26.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.654 "adrfam": "ipv4", 00:20:26.654 "trsvcid": "$NVMF_PORT", 00:20:26.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.654 "hdgst": ${hdgst:-false}, 00:20:26.654 "ddgst": ${ddgst:-false} 00:20:26.654 }, 00:20:26.654 "method": "bdev_nvme_attach_controller" 00:20:26.654 } 00:20:26.654 EOF 00:20:26.654 )") 00:20:26.654 15:30:43 -- nvmf/common.sh@543 -- # cat 00:20:26.654 15:30:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.654 15:30:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.654 { 00:20:26.654 "params": { 00:20:26.654 "name": "Nvme$subsystem", 00:20:26.654 "trtype": "$TEST_TRANSPORT", 00:20:26.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.654 "adrfam": "ipv4", 00:20:26.654 "trsvcid": "$NVMF_PORT", 00:20:26.654 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.654 "hdgst": ${hdgst:-false}, 00:20:26.654 "ddgst": ${ddgst:-false} 00:20:26.654 }, 00:20:26.654 "method": "bdev_nvme_attach_controller" 00:20:26.654 } 00:20:26.654 EOF 00:20:26.654 )") 00:20:26.654 15:30:43 -- nvmf/common.sh@543 -- # cat 00:20:26.654 15:30:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.654 15:30:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.654 { 00:20:26.654 "params": { 00:20:26.654 "name": "Nvme$subsystem", 00:20:26.654 "trtype": "$TEST_TRANSPORT", 00:20:26.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.654 "adrfam": "ipv4", 00:20:26.654 "trsvcid": "$NVMF_PORT", 00:20:26.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.654 "hdgst": ${hdgst:-false}, 00:20:26.654 "ddgst": ${ddgst:-false} 00:20:26.654 }, 00:20:26.654 "method": "bdev_nvme_attach_controller" 00:20:26.654 } 00:20:26.654 EOF 00:20:26.654 )") 00:20:26.654 15:30:43 -- nvmf/common.sh@543 -- # cat 00:20:26.654 [2024-04-26 15:30:44.000069] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:20:26.654 [2024-04-26 15:30:44.000121] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:26.654 15:30:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.654 15:30:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.654 { 00:20:26.654 "params": { 00:20:26.654 "name": "Nvme$subsystem", 00:20:26.654 "trtype": "$TEST_TRANSPORT", 00:20:26.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.654 "adrfam": "ipv4", 00:20:26.654 "trsvcid": "$NVMF_PORT", 00:20:26.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.654 "hdgst": ${hdgst:-false}, 00:20:26.654 "ddgst": ${ddgst:-false} 00:20:26.654 }, 00:20:26.654 "method": "bdev_nvme_attach_controller" 00:20:26.654 } 00:20:26.654 EOF 00:20:26.654 )") 00:20:26.654 15:30:44 -- nvmf/common.sh@543 -- # cat 00:20:26.654 15:30:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.654 15:30:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.654 { 00:20:26.654 "params": { 00:20:26.654 "name": "Nvme$subsystem", 00:20:26.654 "trtype": "$TEST_TRANSPORT", 00:20:26.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.654 "adrfam": "ipv4", 00:20:26.654 "trsvcid": "$NVMF_PORT", 00:20:26.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.654 "hdgst": ${hdgst:-false}, 00:20:26.654 "ddgst": ${ddgst:-false} 00:20:26.654 }, 00:20:26.654 "method": "bdev_nvme_attach_controller" 00:20:26.654 } 00:20:26.654 EOF 00:20:26.654 )") 00:20:26.654 15:30:44 -- nvmf/common.sh@543 -- # cat 00:20:26.654 15:30:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.654 15:30:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.654 { 00:20:26.654 "params": { 00:20:26.654 "name": "Nvme$subsystem", 
00:20:26.654 "trtype": "$TEST_TRANSPORT", 00:20:26.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.654 "adrfam": "ipv4", 00:20:26.654 "trsvcid": "$NVMF_PORT", 00:20:26.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.654 "hdgst": ${hdgst:-false}, 00:20:26.654 "ddgst": ${ddgst:-false} 00:20:26.654 }, 00:20:26.654 "method": "bdev_nvme_attach_controller" 00:20:26.654 } 00:20:26.654 EOF 00:20:26.654 )") 00:20:26.654 15:30:44 -- nvmf/common.sh@543 -- # cat 00:20:26.654 15:30:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.654 15:30:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.654 { 00:20:26.654 "params": { 00:20:26.654 "name": "Nvme$subsystem", 00:20:26.654 "trtype": "$TEST_TRANSPORT", 00:20:26.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.654 "adrfam": "ipv4", 00:20:26.654 "trsvcid": "$NVMF_PORT", 00:20:26.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.654 "hdgst": ${hdgst:-false}, 00:20:26.654 "ddgst": ${ddgst:-false} 00:20:26.654 }, 00:20:26.654 "method": "bdev_nvme_attach_controller" 00:20:26.654 } 00:20:26.654 EOF 00:20:26.654 )") 00:20:26.654 15:30:44 -- nvmf/common.sh@543 -- # cat 00:20:26.654 15:30:44 -- nvmf/common.sh@545 -- # jq . 
00:20:26.654 15:30:44 -- nvmf/common.sh@546 -- # IFS=, 00:20:26.654 15:30:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:26.654 "params": { 00:20:26.654 "name": "Nvme1", 00:20:26.654 "trtype": "tcp", 00:20:26.654 "traddr": "10.0.0.2", 00:20:26.654 "adrfam": "ipv4", 00:20:26.654 "trsvcid": "4420", 00:20:26.654 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.654 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:26.654 "hdgst": false, 00:20:26.654 "ddgst": false 00:20:26.654 }, 00:20:26.654 "method": "bdev_nvme_attach_controller" 00:20:26.654 },{ 00:20:26.654 "params": { 00:20:26.654 "name": "Nvme2", 00:20:26.654 "trtype": "tcp", 00:20:26.654 "traddr": "10.0.0.2", 00:20:26.654 "adrfam": "ipv4", 00:20:26.654 "trsvcid": "4420", 00:20:26.654 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:26.654 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:26.654 "hdgst": false, 00:20:26.654 "ddgst": false 00:20:26.654 }, 00:20:26.654 "method": "bdev_nvme_attach_controller" 00:20:26.654 },{ 00:20:26.654 "params": { 00:20:26.654 "name": "Nvme3", 00:20:26.654 "trtype": "tcp", 00:20:26.654 "traddr": "10.0.0.2", 00:20:26.654 "adrfam": "ipv4", 00:20:26.654 "trsvcid": "4420", 00:20:26.654 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:26.654 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:26.654 "hdgst": false, 00:20:26.654 "ddgst": false 00:20:26.654 }, 00:20:26.654 "method": "bdev_nvme_attach_controller" 00:20:26.654 },{ 00:20:26.654 "params": { 00:20:26.654 "name": "Nvme4", 00:20:26.654 "trtype": "tcp", 00:20:26.654 "traddr": "10.0.0.2", 00:20:26.654 "adrfam": "ipv4", 00:20:26.654 "trsvcid": "4420", 00:20:26.654 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:26.654 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:26.654 "hdgst": false, 00:20:26.654 "ddgst": false 00:20:26.654 }, 00:20:26.654 "method": "bdev_nvme_attach_controller" 00:20:26.654 },{ 00:20:26.654 "params": { 00:20:26.654 "name": "Nvme5", 00:20:26.654 "trtype": "tcp", 00:20:26.654 "traddr": "10.0.0.2", 00:20:26.654 "adrfam": "ipv4", 
00:20:26.654 "trsvcid": "4420", 00:20:26.654 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:26.654 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:26.654 "hdgst": false, 00:20:26.654 "ddgst": false 00:20:26.654 }, 00:20:26.654 "method": "bdev_nvme_attach_controller" 00:20:26.654 },{ 00:20:26.654 "params": { 00:20:26.654 "name": "Nvme6", 00:20:26.654 "trtype": "tcp", 00:20:26.654 "traddr": "10.0.0.2", 00:20:26.654 "adrfam": "ipv4", 00:20:26.654 "trsvcid": "4420", 00:20:26.654 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:26.654 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:26.654 "hdgst": false, 00:20:26.654 "ddgst": false 00:20:26.654 }, 00:20:26.654 "method": "bdev_nvme_attach_controller" 00:20:26.654 },{ 00:20:26.654 "params": { 00:20:26.654 "name": "Nvme7", 00:20:26.654 "trtype": "tcp", 00:20:26.654 "traddr": "10.0.0.2", 00:20:26.654 "adrfam": "ipv4", 00:20:26.654 "trsvcid": "4420", 00:20:26.654 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:26.654 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:26.654 "hdgst": false, 00:20:26.654 "ddgst": false 00:20:26.654 }, 00:20:26.654 "method": "bdev_nvme_attach_controller" 00:20:26.654 },{ 00:20:26.654 "params": { 00:20:26.654 "name": "Nvme8", 00:20:26.654 "trtype": "tcp", 00:20:26.654 "traddr": "10.0.0.2", 00:20:26.654 "adrfam": "ipv4", 00:20:26.654 "trsvcid": "4420", 00:20:26.654 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:26.654 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:26.654 "hdgst": false, 00:20:26.654 "ddgst": false 00:20:26.654 }, 00:20:26.654 "method": "bdev_nvme_attach_controller" 00:20:26.654 },{ 00:20:26.654 "params": { 00:20:26.654 "name": "Nvme9", 00:20:26.654 "trtype": "tcp", 00:20:26.654 "traddr": "10.0.0.2", 00:20:26.654 "adrfam": "ipv4", 00:20:26.654 "trsvcid": "4420", 00:20:26.654 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:26.654 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:26.654 "hdgst": false, 00:20:26.654 "ddgst": false 00:20:26.654 }, 00:20:26.654 "method": "bdev_nvme_attach_controller" 
00:20:26.654 },{ 00:20:26.654 "params": { 00:20:26.654 "name": "Nvme10", 00:20:26.654 "trtype": "tcp", 00:20:26.654 "traddr": "10.0.0.2", 00:20:26.654 "adrfam": "ipv4", 00:20:26.654 "trsvcid": "4420", 00:20:26.654 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:26.654 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:26.654 "hdgst": false, 00:20:26.654 "ddgst": false 00:20:26.654 }, 00:20:26.654 "method": "bdev_nvme_attach_controller" 00:20:26.654 }' 00:20:26.654 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.949 [2024-04-26 15:30:44.079179] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.949 [2024-04-26 15:30:44.142518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.329 15:30:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:28.329 15:30:45 -- common/autotest_common.sh@850 -- # return 0 00:20:28.329 15:30:45 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:28.329 15:30:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.329 15:30:45 -- common/autotest_common.sh@10 -- # set +x 00:20:28.329 15:30:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.329 15:30:45 -- target/shutdown.sh@83 -- # kill -9 1682262 00:20:28.329 15:30:45 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:28.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1682262 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:28.329 15:30:45 -- target/shutdown.sh@87 -- # sleep 1 00:20:29.273 15:30:46 -- target/shutdown.sh@88 -- # kill -0 1681999 00:20:29.273 15:30:46 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:29.273 15:30:46 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:29.273 15:30:46 -- 
nvmf/common.sh@521 -- # config=() 00:20:29.273 15:30:46 -- nvmf/common.sh@521 -- # local subsystem config 00:20:29.273 15:30:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:29.273 15:30:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:29.273 { 00:20:29.273 "params": { 00:20:29.273 "name": "Nvme$subsystem", 00:20:29.273 "trtype": "$TEST_TRANSPORT", 00:20:29.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.273 "adrfam": "ipv4", 00:20:29.273 "trsvcid": "$NVMF_PORT", 00:20:29.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.273 "hdgst": ${hdgst:-false}, 00:20:29.273 "ddgst": ${ddgst:-false} 00:20:29.273 }, 00:20:29.273 "method": "bdev_nvme_attach_controller" 00:20:29.273 } 00:20:29.273 EOF 00:20:29.273 )") 00:20:29.273 15:30:46 -- nvmf/common.sh@543 -- # cat 00:20:29.273 15:30:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:29.273 15:30:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:29.273 { 00:20:29.273 "params": { 00:20:29.273 "name": "Nvme$subsystem", 00:20:29.273 "trtype": "$TEST_TRANSPORT", 00:20:29.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.273 "adrfam": "ipv4", 00:20:29.273 "trsvcid": "$NVMF_PORT", 00:20:29.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.273 "hdgst": ${hdgst:-false}, 00:20:29.273 "ddgst": ${ddgst:-false} 00:20:29.273 }, 00:20:29.273 "method": "bdev_nvme_attach_controller" 00:20:29.273 } 00:20:29.273 EOF 00:20:29.273 )") 00:20:29.273 15:30:46 -- nvmf/common.sh@543 -- # cat 00:20:29.273 15:30:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:29.273 15:30:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:29.273 { 00:20:29.273 "params": { 00:20:29.273 "name": "Nvme$subsystem", 00:20:29.273 "trtype": "$TEST_TRANSPORT", 00:20:29.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.273 "adrfam": "ipv4", 00:20:29.273 "trsvcid": 
"$NVMF_PORT", 00:20:29.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.273 "hdgst": ${hdgst:-false}, 00:20:29.273 "ddgst": ${ddgst:-false} 00:20:29.273 }, 00:20:29.273 "method": "bdev_nvme_attach_controller" 00:20:29.273 } 00:20:29.273 EOF 00:20:29.273 )") 00:20:29.273 15:30:46 -- nvmf/common.sh@543 -- # cat 00:20:29.273 15:30:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:29.273 15:30:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:29.273 { 00:20:29.273 "params": { 00:20:29.273 "name": "Nvme$subsystem", 00:20:29.273 "trtype": "$TEST_TRANSPORT", 00:20:29.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.273 "adrfam": "ipv4", 00:20:29.273 "trsvcid": "$NVMF_PORT", 00:20:29.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.273 "hdgst": ${hdgst:-false}, 00:20:29.273 "ddgst": ${ddgst:-false} 00:20:29.273 }, 00:20:29.273 "method": "bdev_nvme_attach_controller" 00:20:29.273 } 00:20:29.273 EOF 00:20:29.273 )") 00:20:29.273 15:30:46 -- nvmf/common.sh@543 -- # cat 00:20:29.273 15:30:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:29.273 15:30:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:29.273 { 00:20:29.273 "params": { 00:20:29.273 "name": "Nvme$subsystem", 00:20:29.273 "trtype": "$TEST_TRANSPORT", 00:20:29.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.273 "adrfam": "ipv4", 00:20:29.273 "trsvcid": "$NVMF_PORT", 00:20:29.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.273 "hdgst": ${hdgst:-false}, 00:20:29.273 "ddgst": ${ddgst:-false} 00:20:29.273 }, 00:20:29.273 "method": "bdev_nvme_attach_controller" 00:20:29.273 } 00:20:29.273 EOF 00:20:29.273 )") 00:20:29.274 15:30:46 -- nvmf/common.sh@543 -- # cat 00:20:29.274 15:30:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:29.274 15:30:46 
-- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:29.274 { 00:20:29.274 "params": { 00:20:29.274 "name": "Nvme$subsystem", 00:20:29.274 "trtype": "$TEST_TRANSPORT", 00:20:29.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.274 "adrfam": "ipv4", 00:20:29.274 "trsvcid": "$NVMF_PORT", 00:20:29.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.274 "hdgst": ${hdgst:-false}, 00:20:29.274 "ddgst": ${ddgst:-false} 00:20:29.274 }, 00:20:29.274 "method": "bdev_nvme_attach_controller" 00:20:29.274 } 00:20:29.274 EOF 00:20:29.274 )") 00:20:29.274 15:30:46 -- nvmf/common.sh@543 -- # cat 00:20:29.274 [2024-04-26 15:30:46.537377] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:20:29.274 [2024-04-26 15:30:46.537429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1682758 ] 00:20:29.274 15:30:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:29.274 15:30:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:29.274 { 00:20:29.274 "params": { 00:20:29.274 "name": "Nvme$subsystem", 00:20:29.274 "trtype": "$TEST_TRANSPORT", 00:20:29.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.274 "adrfam": "ipv4", 00:20:29.274 "trsvcid": "$NVMF_PORT", 00:20:29.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.274 "hdgst": ${hdgst:-false}, 00:20:29.274 "ddgst": ${ddgst:-false} 00:20:29.274 }, 00:20:29.274 "method": "bdev_nvme_attach_controller" 00:20:29.274 } 00:20:29.274 EOF 00:20:29.274 )") 00:20:29.274 15:30:46 -- nvmf/common.sh@543 -- # cat 00:20:29.274 15:30:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:29.274 15:30:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:29.274 { 00:20:29.274 "params": { 
00:20:29.274 "name": "Nvme$subsystem", 00:20:29.274 "trtype": "$TEST_TRANSPORT", 00:20:29.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.274 "adrfam": "ipv4", 00:20:29.274 "trsvcid": "$NVMF_PORT", 00:20:29.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.274 "hdgst": ${hdgst:-false}, 00:20:29.274 "ddgst": ${ddgst:-false} 00:20:29.274 }, 00:20:29.274 "method": "bdev_nvme_attach_controller" 00:20:29.274 } 00:20:29.274 EOF 00:20:29.274 )") 00:20:29.274 15:30:46 -- nvmf/common.sh@543 -- # cat 00:20:29.274 15:30:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:29.274 15:30:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:29.274 { 00:20:29.274 "params": { 00:20:29.274 "name": "Nvme$subsystem", 00:20:29.274 "trtype": "$TEST_TRANSPORT", 00:20:29.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.274 "adrfam": "ipv4", 00:20:29.274 "trsvcid": "$NVMF_PORT", 00:20:29.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.274 "hdgst": ${hdgst:-false}, 00:20:29.274 "ddgst": ${ddgst:-false} 00:20:29.274 }, 00:20:29.274 "method": "bdev_nvme_attach_controller" 00:20:29.274 } 00:20:29.274 EOF 00:20:29.274 )") 00:20:29.274 15:30:46 -- nvmf/common.sh@543 -- # cat 00:20:29.274 15:30:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:29.274 15:30:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:29.274 { 00:20:29.274 "params": { 00:20:29.274 "name": "Nvme$subsystem", 00:20:29.274 "trtype": "$TEST_TRANSPORT", 00:20:29.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.274 "adrfam": "ipv4", 00:20:29.274 "trsvcid": "$NVMF_PORT", 00:20:29.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.274 "hdgst": ${hdgst:-false}, 00:20:29.274 "ddgst": ${ddgst:-false} 00:20:29.274 }, 00:20:29.274 "method": "bdev_nvme_attach_controller" 00:20:29.274 } 
00:20:29.274 EOF 00:20:29.274 )") 00:20:29.274 15:30:46 -- nvmf/common.sh@543 -- # cat 00:20:29.274 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.274 15:30:46 -- nvmf/common.sh@545 -- # jq . 00:20:29.274 15:30:46 -- nvmf/common.sh@546 -- # IFS=, 00:20:29.274 15:30:46 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:29.274 "params": { 00:20:29.274 "name": "Nvme1", 00:20:29.274 "trtype": "tcp", 00:20:29.274 "traddr": "10.0.0.2", 00:20:29.274 "adrfam": "ipv4", 00:20:29.274 "trsvcid": "4420", 00:20:29.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:29.274 "hdgst": false, 00:20:29.274 "ddgst": false 00:20:29.274 }, 00:20:29.274 "method": "bdev_nvme_attach_controller" 00:20:29.274 },{ 00:20:29.274 "params": { 00:20:29.274 "name": "Nvme2", 00:20:29.274 "trtype": "tcp", 00:20:29.274 "traddr": "10.0.0.2", 00:20:29.274 "adrfam": "ipv4", 00:20:29.274 "trsvcid": "4420", 00:20:29.274 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:29.274 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:29.274 "hdgst": false, 00:20:29.274 "ddgst": false 00:20:29.274 }, 00:20:29.274 "method": "bdev_nvme_attach_controller" 00:20:29.274 },{ 00:20:29.274 "params": { 00:20:29.274 "name": "Nvme3", 00:20:29.274 "trtype": "tcp", 00:20:29.274 "traddr": "10.0.0.2", 00:20:29.274 "adrfam": "ipv4", 00:20:29.274 "trsvcid": "4420", 00:20:29.274 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:29.274 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:29.274 "hdgst": false, 00:20:29.274 "ddgst": false 00:20:29.274 }, 00:20:29.274 "method": "bdev_nvme_attach_controller" 00:20:29.274 },{ 00:20:29.274 "params": { 00:20:29.274 "name": "Nvme4", 00:20:29.274 "trtype": "tcp", 00:20:29.274 "traddr": "10.0.0.2", 00:20:29.274 "adrfam": "ipv4", 00:20:29.274 "trsvcid": "4420", 00:20:29.274 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:29.274 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:29.274 "hdgst": false, 00:20:29.274 "ddgst": false 00:20:29.274 }, 00:20:29.274 
"method": "bdev_nvme_attach_controller" 00:20:29.274 },{ 00:20:29.274 "params": { 00:20:29.274 "name": "Nvme5", 00:20:29.274 "trtype": "tcp", 00:20:29.274 "traddr": "10.0.0.2", 00:20:29.274 "adrfam": "ipv4", 00:20:29.274 "trsvcid": "4420", 00:20:29.274 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:29.274 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:29.274 "hdgst": false, 00:20:29.274 "ddgst": false 00:20:29.274 }, 00:20:29.274 "method": "bdev_nvme_attach_controller" 00:20:29.274 },{ 00:20:29.274 "params": { 00:20:29.274 "name": "Nvme6", 00:20:29.274 "trtype": "tcp", 00:20:29.274 "traddr": "10.0.0.2", 00:20:29.274 "adrfam": "ipv4", 00:20:29.274 "trsvcid": "4420", 00:20:29.274 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:29.274 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:29.274 "hdgst": false, 00:20:29.274 "ddgst": false 00:20:29.274 }, 00:20:29.274 "method": "bdev_nvme_attach_controller" 00:20:29.274 },{ 00:20:29.274 "params": { 00:20:29.274 "name": "Nvme7", 00:20:29.274 "trtype": "tcp", 00:20:29.274 "traddr": "10.0.0.2", 00:20:29.274 "adrfam": "ipv4", 00:20:29.274 "trsvcid": "4420", 00:20:29.274 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:29.274 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:29.274 "hdgst": false, 00:20:29.274 "ddgst": false 00:20:29.274 }, 00:20:29.274 "method": "bdev_nvme_attach_controller" 00:20:29.274 },{ 00:20:29.274 "params": { 00:20:29.274 "name": "Nvme8", 00:20:29.274 "trtype": "tcp", 00:20:29.274 "traddr": "10.0.0.2", 00:20:29.274 "adrfam": "ipv4", 00:20:29.274 "trsvcid": "4420", 00:20:29.274 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:29.274 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:29.274 "hdgst": false, 00:20:29.274 "ddgst": false 00:20:29.274 }, 00:20:29.274 "method": "bdev_nvme_attach_controller" 00:20:29.274 },{ 00:20:29.274 "params": { 00:20:29.274 "name": "Nvme9", 00:20:29.274 "trtype": "tcp", 00:20:29.274 "traddr": "10.0.0.2", 00:20:29.274 "adrfam": "ipv4", 00:20:29.274 "trsvcid": "4420", 00:20:29.274 "subnqn": 
"nqn.2016-06.io.spdk:cnode9", 00:20:29.274 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:29.274 "hdgst": false, 00:20:29.274 "ddgst": false 00:20:29.274 }, 00:20:29.274 "method": "bdev_nvme_attach_controller" 00:20:29.274 },{ 00:20:29.274 "params": { 00:20:29.274 "name": "Nvme10", 00:20:29.274 "trtype": "tcp", 00:20:29.274 "traddr": "10.0.0.2", 00:20:29.274 "adrfam": "ipv4", 00:20:29.274 "trsvcid": "4420", 00:20:29.274 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:29.274 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:29.274 "hdgst": false, 00:20:29.274 "ddgst": false 00:20:29.274 }, 00:20:29.274 "method": "bdev_nvme_attach_controller" 00:20:29.274 }' 00:20:29.274 [2024-04-26 15:30:46.600063] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.274 [2024-04-26 15:30:46.661951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.660 Running I/O for 1 seconds... 00:20:32.044 00:20:32.044 Latency(us) 00:20:32.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.044 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.044 Verification LBA range: start 0x0 length 0x400 00:20:32.044 Nvme1n1 : 1.05 182.86 11.43 0.00 0.00 346188.80 26542.08 363506.35 00:20:32.044 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.044 Verification LBA range: start 0x0 length 0x400 00:20:32.044 Nvme2n1 : 1.09 176.06 11.00 0.00 0.00 353270.61 26651.31 311077.55 00:20:32.044 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.044 Verification LBA range: start 0x0 length 0x400 00:20:32.044 Nvme3n1 : 1.04 183.79 11.49 0.00 0.00 331512.60 52647.25 330301.44 00:20:32.044 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.044 Verification LBA range: start 0x0 length 0x400 00:20:32.044 Nvme4n1 : 1.18 220.71 13.79 0.00 0.00 272265.11 19005.44 307582.29 00:20:32.044 Job: Nvme5n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:20:32.044 Verification LBA range: start 0x0 length 0x400 00:20:32.044 Nvme5n1 : 1.10 174.87 10.93 0.00 0.00 336261.97 22719.15 346030.08 00:20:32.044 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.044 Verification LBA range: start 0x0 length 0x400 00:20:32.044 Nvme6n1 : 1.13 169.55 10.60 0.00 0.00 333551.50 30146.56 337291.95 00:20:32.044 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.044 Verification LBA range: start 0x0 length 0x400 00:20:32.044 Nvme7n1 : 1.14 224.29 14.02 0.00 0.00 253358.72 15837.87 346030.08 00:20:32.044 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.044 Verification LBA range: start 0x0 length 0x400 00:20:32.044 Nvme8n1 : 1.18 219.49 13.72 0.00 0.00 255214.30 1815.89 283115.52 00:20:32.044 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.044 Verification LBA range: start 0x0 length 0x400 00:20:32.044 Nvme9n1 : 1.19 214.63 13.41 0.00 0.00 256627.95 12615.68 342534.83 00:20:32.044 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.044 Verification LBA range: start 0x0 length 0x400 00:20:32.044 Nvme10n1 : 1.20 213.38 13.34 0.00 0.00 253661.76 12233.39 361758.72 00:20:32.044 =================================================================================================================== 00:20:32.044 Total : 1979.62 123.73 0.00 0.00 293233.75 1815.89 363506.35 00:20:32.044 15:30:49 -- target/shutdown.sh@94 -- # stoptarget 00:20:32.044 15:30:49 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:32.044 15:30:49 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:32.044 15:30:49 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:32.044 15:30:49 -- target/shutdown.sh@45 -- # 
nvmftestfini 00:20:32.044 15:30:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:32.044 15:30:49 -- nvmf/common.sh@117 -- # sync 00:20:32.044 15:30:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:32.044 15:30:49 -- nvmf/common.sh@120 -- # set +e 00:20:32.044 15:30:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:32.044 15:30:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:32.044 rmmod nvme_tcp 00:20:32.044 rmmod nvme_fabrics 00:20:32.044 rmmod nvme_keyring 00:20:32.044 15:30:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:32.044 15:30:49 -- nvmf/common.sh@124 -- # set -e 00:20:32.044 15:30:49 -- nvmf/common.sh@125 -- # return 0 00:20:32.044 15:30:49 -- nvmf/common.sh@478 -- # '[' -n 1681999 ']' 00:20:32.044 15:30:49 -- nvmf/common.sh@479 -- # killprocess 1681999 00:20:32.044 15:30:49 -- common/autotest_common.sh@936 -- # '[' -z 1681999 ']' 00:20:32.044 15:30:49 -- common/autotest_common.sh@940 -- # kill -0 1681999 00:20:32.044 15:30:49 -- common/autotest_common.sh@941 -- # uname 00:20:32.044 15:30:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:32.044 15:30:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1681999 00:20:32.044 15:30:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:32.044 15:30:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:32.044 15:30:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1681999' 00:20:32.044 killing process with pid 1681999 00:20:32.044 15:30:49 -- common/autotest_common.sh@955 -- # kill 1681999 00:20:32.044 15:30:49 -- common/autotest_common.sh@960 -- # wait 1681999 00:20:32.305 15:30:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:32.305 15:30:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:32.305 15:30:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:32.305 15:30:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:32.305 15:30:49 -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:20:32.305 15:30:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.305 15:30:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.305 15:30:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.849 15:30:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:34.849 00:20:34.849 real 0m16.385s 00:20:34.849 user 0m33.497s 00:20:34.849 sys 0m6.431s 00:20:34.849 15:30:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:34.849 15:30:51 -- common/autotest_common.sh@10 -- # set +x 00:20:34.849 ************************************ 00:20:34.849 END TEST nvmf_shutdown_tc1 00:20:34.849 ************************************ 00:20:34.849 15:30:51 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:34.849 15:30:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:34.849 15:30:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:34.849 15:30:51 -- common/autotest_common.sh@10 -- # set +x 00:20:34.849 ************************************ 00:20:34.849 START TEST nvmf_shutdown_tc2 00:20:34.849 ************************************ 00:20:34.849 15:30:51 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:20:34.849 15:30:51 -- target/shutdown.sh@99 -- # starttarget 00:20:34.849 15:30:51 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:34.849 15:30:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:34.849 15:30:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.849 15:30:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:34.849 15:30:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:34.849 15:30:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:34.849 15:30:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.849 15:30:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.849 15:30:51 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.849 15:30:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:34.849 15:30:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:34.849 15:30:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:34.849 15:30:51 -- common/autotest_common.sh@10 -- # set +x 00:20:34.849 15:30:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:34.849 15:30:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:34.849 15:30:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:34.849 15:30:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:34.849 15:30:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:34.849 15:30:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:34.849 15:30:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:34.849 15:30:51 -- nvmf/common.sh@295 -- # net_devs=() 00:20:34.849 15:30:51 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:34.849 15:30:51 -- nvmf/common.sh@296 -- # e810=() 00:20:34.849 15:30:51 -- nvmf/common.sh@296 -- # local -ga e810 00:20:34.849 15:30:51 -- nvmf/common.sh@297 -- # x722=() 00:20:34.849 15:30:51 -- nvmf/common.sh@297 -- # local -ga x722 00:20:34.849 15:30:51 -- nvmf/common.sh@298 -- # mlx=() 00:20:34.849 15:30:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:34.849 15:30:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:34.849 15:30:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:34.849 15:30:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:34.849 15:30:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:34.849 15:30:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:34.849 15:30:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:34.849 15:30:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:34.849 15:30:51 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:34.849 15:30:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:34.849 15:30:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:34.849 15:30:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:34.849 15:30:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:34.849 15:30:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:34.849 15:30:51 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:34.849 15:30:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:34.849 15:30:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:34.849 15:30:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:34.849 15:30:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:34.849 15:30:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:34.849 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:34.849 15:30:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:34.849 15:30:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:34.849 15:30:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.849 15:30:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.849 15:30:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:34.849 15:30:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:34.849 15:30:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:34.849 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:34.849 15:30:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:34.849 15:30:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:34.849 15:30:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.849 15:30:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.849 15:30:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:34.849 15:30:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:34.849 15:30:51 -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:34.849 15:30:51 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:34.849 15:30:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:34.849 15:30:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.849 15:30:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:34.849 15:30:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.849 15:30:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:34.849 Found net devices under 0000:31:00.0: cvl_0_0 00:20:34.849 15:30:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.849 15:30:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:34.849 15:30:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.849 15:30:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:34.849 15:30:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.849 15:30:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:34.849 Found net devices under 0000:31:00.1: cvl_0_1 00:20:34.849 15:30:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.849 15:30:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:34.849 15:30:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:34.849 15:30:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:34.849 15:30:51 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:34.849 15:30:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:34.849 15:30:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.849 15:30:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.849 15:30:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:34.849 15:30:51 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:34.849 15:30:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:34.849 15:30:51 -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:34.849 15:30:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:34.849 15:30:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:34.849 15:30:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.849 15:30:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:34.849 15:30:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:34.849 15:30:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:34.849 15:30:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:34.849 15:30:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:34.849 15:30:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:34.849 15:30:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:34.849 15:30:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:34.849 15:30:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:34.849 15:30:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:34.849 15:30:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:34.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:20:34.849 00:20:34.849 --- 10.0.0.2 ping statistics --- 00:20:34.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.849 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:20:34.850 15:30:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:34.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:34.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:20:34.850 00:20:34.850 --- 10.0.0.1 ping statistics --- 00:20:34.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.850 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:20:34.850 15:30:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.850 15:30:52 -- nvmf/common.sh@411 -- # return 0 00:20:34.850 15:30:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:34.850 15:30:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.850 15:30:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:34.850 15:30:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:34.850 15:30:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.850 15:30:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:34.850 15:30:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:35.110 15:30:52 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:35.110 15:30:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:35.110 15:30:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:35.110 15:30:52 -- common/autotest_common.sh@10 -- # set +x 00:20:35.110 15:30:52 -- nvmf/common.sh@470 -- # nvmfpid=1683924 00:20:35.110 15:30:52 -- nvmf/common.sh@471 -- # waitforlisten 1683924 00:20:35.110 15:30:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:35.110 15:30:52 -- common/autotest_common.sh@817 -- # '[' -z 1683924 ']' 00:20:35.110 15:30:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.110 15:30:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:35.110 15:30:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:35.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.110 15:30:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:35.110 15:30:52 -- common/autotest_common.sh@10 -- # set +x 00:20:35.110 [2024-04-26 15:30:52.379121] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:20:35.110 [2024-04-26 15:30:52.379185] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.110 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.110 [2024-04-26 15:30:52.467717] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:35.110 [2024-04-26 15:30:52.527537] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.111 [2024-04-26 15:30:52.527574] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.111 [2024-04-26 15:30:52.527580] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.111 [2024-04-26 15:30:52.527585] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.111 [2024-04-26 15:30:52.527589] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:35.111 [2024-04-26 15:30:52.527701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.111 [2024-04-26 15:30:52.527877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:35.111 [2024-04-26 15:30:52.527993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.111 [2024-04-26 15:30:52.527994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:36.052 15:30:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:36.052 15:30:53 -- common/autotest_common.sh@850 -- # return 0 00:20:36.052 15:30:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:36.052 15:30:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:36.052 15:30:53 -- common/autotest_common.sh@10 -- # set +x 00:20:36.052 15:30:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.052 15:30:53 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:36.052 15:30:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.052 15:30:53 -- common/autotest_common.sh@10 -- # set +x 00:20:36.052 [2024-04-26 15:30:53.203064] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.052 15:30:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.052 15:30:53 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:36.052 15:30:53 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:36.052 15:30:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:36.052 15:30:53 -- common/autotest_common.sh@10 -- # set +x 00:20:36.052 15:30:53 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:36.052 15:30:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.052 15:30:53 -- target/shutdown.sh@28 -- # cat 00:20:36.052 15:30:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 
00:20:36.052 15:30:53 -- target/shutdown.sh@28 -- # cat 00:20:36.052 15:30:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.052 15:30:53 -- target/shutdown.sh@28 -- # cat 00:20:36.052 15:30:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.052 15:30:53 -- target/shutdown.sh@28 -- # cat 00:20:36.052 15:30:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.052 15:30:53 -- target/shutdown.sh@28 -- # cat 00:20:36.052 15:30:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.052 15:30:53 -- target/shutdown.sh@28 -- # cat 00:20:36.052 15:30:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.052 15:30:53 -- target/shutdown.sh@28 -- # cat 00:20:36.052 15:30:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.052 15:30:53 -- target/shutdown.sh@28 -- # cat 00:20:36.052 15:30:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.052 15:30:53 -- target/shutdown.sh@28 -- # cat 00:20:36.052 15:30:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:36.052 15:30:53 -- target/shutdown.sh@28 -- # cat 00:20:36.052 15:30:53 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:36.052 15:30:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.052 15:30:53 -- common/autotest_common.sh@10 -- # set +x 00:20:36.052 Malloc1 00:20:36.052 [2024-04-26 15:30:53.301811] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.052 Malloc2 00:20:36.052 Malloc3 00:20:36.052 Malloc4 00:20:36.052 Malloc5 00:20:36.052 Malloc6 00:20:36.313 Malloc7 00:20:36.313 Malloc8 00:20:36.313 Malloc9 00:20:36.313 Malloc10 00:20:36.313 15:30:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.313 15:30:53 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:36.313 15:30:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:36.313 15:30:53 -- 
common/autotest_common.sh@10 -- # set +x 00:20:36.313 15:30:53 -- target/shutdown.sh@103 -- # perfpid=1684267 00:20:36.313 15:30:53 -- target/shutdown.sh@104 -- # waitforlisten 1684267 /var/tmp/bdevperf.sock 00:20:36.313 15:30:53 -- common/autotest_common.sh@817 -- # '[' -z 1684267 ']' 00:20:36.313 15:30:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.313 15:30:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:36.313 15:30:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.313 15:30:53 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:36.313 15:30:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:36.313 15:30:53 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:36.313 15:30:53 -- common/autotest_common.sh@10 -- # set +x 00:20:36.314 15:30:53 -- nvmf/common.sh@521 -- # config=() 00:20:36.314 15:30:53 -- nvmf/common.sh@521 -- # local subsystem config 00:20:36.314 15:30:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:36.314 15:30:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:36.314 { 00:20:36.314 "params": { 00:20:36.314 "name": "Nvme$subsystem", 00:20:36.314 "trtype": "$TEST_TRANSPORT", 00:20:36.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.314 "adrfam": "ipv4", 00:20:36.314 "trsvcid": "$NVMF_PORT", 00:20:36.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.314 "hdgst": ${hdgst:-false}, 00:20:36.314 "ddgst": ${ddgst:-false} 00:20:36.314 }, 00:20:36.314 "method": "bdev_nvme_attach_controller" 00:20:36.314 } 00:20:36.314 EOF 00:20:36.314 
)") 00:20:36.314 15:30:53 -- nvmf/common.sh@543 -- # cat 00:20:36.314 15:30:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:36.314 15:30:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:36.314 { 00:20:36.314 "params": { 00:20:36.314 "name": "Nvme$subsystem", 00:20:36.314 "trtype": "$TEST_TRANSPORT", 00:20:36.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.314 "adrfam": "ipv4", 00:20:36.314 "trsvcid": "$NVMF_PORT", 00:20:36.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.314 "hdgst": ${hdgst:-false}, 00:20:36.314 "ddgst": ${ddgst:-false} 00:20:36.314 }, 00:20:36.314 "method": "bdev_nvme_attach_controller" 00:20:36.314 } 00:20:36.314 EOF 00:20:36.314 )") 00:20:36.314 15:30:53 -- nvmf/common.sh@543 -- # cat 00:20:36.314 15:30:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:36.314 15:30:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:36.314 { 00:20:36.314 "params": { 00:20:36.314 "name": "Nvme$subsystem", 00:20:36.314 "trtype": "$TEST_TRANSPORT", 00:20:36.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.314 "adrfam": "ipv4", 00:20:36.314 "trsvcid": "$NVMF_PORT", 00:20:36.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.314 "hdgst": ${hdgst:-false}, 00:20:36.314 "ddgst": ${ddgst:-false} 00:20:36.314 }, 00:20:36.314 "method": "bdev_nvme_attach_controller" 00:20:36.314 } 00:20:36.314 EOF 00:20:36.314 )") 00:20:36.314 15:30:53 -- nvmf/common.sh@543 -- # cat 00:20:36.314 15:30:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:36.314 15:30:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:36.314 { 00:20:36.314 "params": { 00:20:36.314 "name": "Nvme$subsystem", 00:20:36.314 "trtype": "$TEST_TRANSPORT", 00:20:36.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.314 "adrfam": "ipv4", 00:20:36.314 "trsvcid": "$NVMF_PORT", 00:20:36.314 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.314 "hdgst": ${hdgst:-false}, 00:20:36.314 "ddgst": ${ddgst:-false} 00:20:36.314 }, 00:20:36.314 "method": "bdev_nvme_attach_controller" 00:20:36.314 } 00:20:36.314 EOF 00:20:36.314 )") 00:20:36.314 15:30:53 -- nvmf/common.sh@543 -- # cat 00:20:36.314 15:30:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:36.314 15:30:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:36.314 { 00:20:36.314 "params": { 00:20:36.314 "name": "Nvme$subsystem", 00:20:36.314 "trtype": "$TEST_TRANSPORT", 00:20:36.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.314 "adrfam": "ipv4", 00:20:36.314 "trsvcid": "$NVMF_PORT", 00:20:36.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.314 "hdgst": ${hdgst:-false}, 00:20:36.314 "ddgst": ${ddgst:-false} 00:20:36.314 }, 00:20:36.314 "method": "bdev_nvme_attach_controller" 00:20:36.314 } 00:20:36.314 EOF 00:20:36.314 )") 00:20:36.314 15:30:53 -- nvmf/common.sh@543 -- # cat 00:20:36.314 15:30:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:36.314 15:30:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:36.314 { 00:20:36.314 "params": { 00:20:36.314 "name": "Nvme$subsystem", 00:20:36.314 "trtype": "$TEST_TRANSPORT", 00:20:36.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.314 "adrfam": "ipv4", 00:20:36.314 "trsvcid": "$NVMF_PORT", 00:20:36.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.314 "hdgst": ${hdgst:-false}, 00:20:36.314 "ddgst": ${ddgst:-false} 00:20:36.314 }, 00:20:36.314 "method": "bdev_nvme_attach_controller" 00:20:36.314 } 00:20:36.314 EOF 00:20:36.314 )") 00:20:36.314 15:30:53 -- nvmf/common.sh@543 -- # cat 00:20:36.314 [2024-04-26 15:30:53.744549] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:20:36.314 [2024-04-26 15:30:53.744599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1684267 ] 00:20:36.314 15:30:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:36.314 15:30:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:36.314 { 00:20:36.314 "params": { 00:20:36.314 "name": "Nvme$subsystem", 00:20:36.314 "trtype": "$TEST_TRANSPORT", 00:20:36.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.314 "adrfam": "ipv4", 00:20:36.314 "trsvcid": "$NVMF_PORT", 00:20:36.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.314 "hdgst": ${hdgst:-false}, 00:20:36.314 "ddgst": ${ddgst:-false} 00:20:36.314 }, 00:20:36.314 "method": "bdev_nvme_attach_controller" 00:20:36.314 } 00:20:36.314 EOF 00:20:36.314 )") 00:20:36.314 15:30:53 -- nvmf/common.sh@543 -- # cat 00:20:36.314 15:30:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:36.314 15:30:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:36.314 { 00:20:36.314 "params": { 00:20:36.314 "name": "Nvme$subsystem", 00:20:36.314 "trtype": "$TEST_TRANSPORT", 00:20:36.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.314 "adrfam": "ipv4", 00:20:36.314 "trsvcid": "$NVMF_PORT", 00:20:36.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.314 "hdgst": ${hdgst:-false}, 00:20:36.314 "ddgst": ${ddgst:-false} 00:20:36.314 }, 00:20:36.314 "method": "bdev_nvme_attach_controller" 00:20:36.314 } 00:20:36.314 EOF 00:20:36.314 )") 00:20:36.314 15:30:53 -- nvmf/common.sh@543 -- # cat 00:20:36.314 15:30:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:36.314 15:30:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:36.314 { 00:20:36.314 "params": { 00:20:36.314 "name": 
"Nvme$subsystem", 00:20:36.314 "trtype": "$TEST_TRANSPORT", 00:20:36.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.314 "adrfam": "ipv4", 00:20:36.314 "trsvcid": "$NVMF_PORT", 00:20:36.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.314 "hdgst": ${hdgst:-false}, 00:20:36.314 "ddgst": ${ddgst:-false} 00:20:36.314 }, 00:20:36.314 "method": "bdev_nvme_attach_controller" 00:20:36.314 } 00:20:36.314 EOF 00:20:36.314 )") 00:20:36.576 15:30:53 -- nvmf/common.sh@543 -- # cat 00:20:36.576 15:30:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:36.576 15:30:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:36.576 { 00:20:36.576 "params": { 00:20:36.576 "name": "Nvme$subsystem", 00:20:36.576 "trtype": "$TEST_TRANSPORT", 00:20:36.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.576 "adrfam": "ipv4", 00:20:36.576 "trsvcid": "$NVMF_PORT", 00:20:36.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.576 "hdgst": ${hdgst:-false}, 00:20:36.576 "ddgst": ${ddgst:-false} 00:20:36.576 }, 00:20:36.576 "method": "bdev_nvme_attach_controller" 00:20:36.576 } 00:20:36.576 EOF 00:20:36.576 )") 00:20:36.576 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.576 15:30:53 -- nvmf/common.sh@543 -- # cat 00:20:36.576 15:30:53 -- nvmf/common.sh@545 -- # jq . 
00:20:36.576 15:30:53 -- nvmf/common.sh@546 -- # IFS=, 00:20:36.576 15:30:53 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:36.576 "params": { 00:20:36.576 "name": "Nvme1", 00:20:36.576 "trtype": "tcp", 00:20:36.576 "traddr": "10.0.0.2", 00:20:36.576 "adrfam": "ipv4", 00:20:36.576 "trsvcid": "4420", 00:20:36.576 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.576 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.576 "hdgst": false, 00:20:36.576 "ddgst": false 00:20:36.576 }, 00:20:36.576 "method": "bdev_nvme_attach_controller" 00:20:36.576 },{ 00:20:36.576 "params": { 00:20:36.576 "name": "Nvme2", 00:20:36.576 "trtype": "tcp", 00:20:36.576 "traddr": "10.0.0.2", 00:20:36.576 "adrfam": "ipv4", 00:20:36.576 "trsvcid": "4420", 00:20:36.576 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:36.576 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:36.576 "hdgst": false, 00:20:36.576 "ddgst": false 00:20:36.576 }, 00:20:36.576 "method": "bdev_nvme_attach_controller" 00:20:36.576 },{ 00:20:36.576 "params": { 00:20:36.576 "name": "Nvme3", 00:20:36.576 "trtype": "tcp", 00:20:36.576 "traddr": "10.0.0.2", 00:20:36.576 "adrfam": "ipv4", 00:20:36.576 "trsvcid": "4420", 00:20:36.576 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:36.576 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:36.576 "hdgst": false, 00:20:36.576 "ddgst": false 00:20:36.576 }, 00:20:36.576 "method": "bdev_nvme_attach_controller" 00:20:36.576 },{ 00:20:36.576 "params": { 00:20:36.576 "name": "Nvme4", 00:20:36.576 "trtype": "tcp", 00:20:36.576 "traddr": "10.0.0.2", 00:20:36.576 "adrfam": "ipv4", 00:20:36.576 "trsvcid": "4420", 00:20:36.576 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:36.576 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:36.576 "hdgst": false, 00:20:36.576 "ddgst": false 00:20:36.576 }, 00:20:36.576 "method": "bdev_nvme_attach_controller" 00:20:36.576 },{ 00:20:36.576 "params": { 00:20:36.576 "name": "Nvme5", 00:20:36.576 "trtype": "tcp", 00:20:36.576 "traddr": "10.0.0.2", 00:20:36.576 "adrfam": "ipv4", 
00:20:36.576 "trsvcid": "4420", 00:20:36.576 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:36.576 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:36.576 "hdgst": false, 00:20:36.576 "ddgst": false 00:20:36.576 }, 00:20:36.576 "method": "bdev_nvme_attach_controller" 00:20:36.576 },{ 00:20:36.576 "params": { 00:20:36.576 "name": "Nvme6", 00:20:36.576 "trtype": "tcp", 00:20:36.576 "traddr": "10.0.0.2", 00:20:36.576 "adrfam": "ipv4", 00:20:36.576 "trsvcid": "4420", 00:20:36.576 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:36.576 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:36.576 "hdgst": false, 00:20:36.576 "ddgst": false 00:20:36.576 }, 00:20:36.576 "method": "bdev_nvme_attach_controller" 00:20:36.576 },{ 00:20:36.576 "params": { 00:20:36.576 "name": "Nvme7", 00:20:36.576 "trtype": "tcp", 00:20:36.576 "traddr": "10.0.0.2", 00:20:36.576 "adrfam": "ipv4", 00:20:36.576 "trsvcid": "4420", 00:20:36.576 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:36.576 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:36.576 "hdgst": false, 00:20:36.576 "ddgst": false 00:20:36.576 }, 00:20:36.576 "method": "bdev_nvme_attach_controller" 00:20:36.576 },{ 00:20:36.576 "params": { 00:20:36.576 "name": "Nvme8", 00:20:36.576 "trtype": "tcp", 00:20:36.576 "traddr": "10.0.0.2", 00:20:36.576 "adrfam": "ipv4", 00:20:36.576 "trsvcid": "4420", 00:20:36.576 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:36.576 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:36.576 "hdgst": false, 00:20:36.576 "ddgst": false 00:20:36.576 }, 00:20:36.576 "method": "bdev_nvme_attach_controller" 00:20:36.576 },{ 00:20:36.576 "params": { 00:20:36.576 "name": "Nvme9", 00:20:36.576 "trtype": "tcp", 00:20:36.576 "traddr": "10.0.0.2", 00:20:36.576 "adrfam": "ipv4", 00:20:36.576 "trsvcid": "4420", 00:20:36.576 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:36.576 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:36.576 "hdgst": false, 00:20:36.576 "ddgst": false 00:20:36.576 }, 00:20:36.576 "method": "bdev_nvme_attach_controller" 
00:20:36.576 },{ 00:20:36.576 "params": { 00:20:36.576 "name": "Nvme10", 00:20:36.576 "trtype": "tcp", 00:20:36.576 "traddr": "10.0.0.2", 00:20:36.576 "adrfam": "ipv4", 00:20:36.576 "trsvcid": "4420", 00:20:36.576 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:36.576 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:36.576 "hdgst": false, 00:20:36.576 "ddgst": false 00:20:36.576 }, 00:20:36.576 "method": "bdev_nvme_attach_controller" 00:20:36.576 }' 00:20:36.576 [2024-04-26 15:30:53.804994] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.576 [2024-04-26 15:30:53.868620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.965 Running I/O for 10 seconds... 00:20:37.965 15:30:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:37.965 15:30:55 -- common/autotest_common.sh@850 -- # return 0 00:20:37.965 15:30:55 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:37.965 15:30:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.965 15:30:55 -- common/autotest_common.sh@10 -- # set +x 00:20:38.226 15:30:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.226 15:30:55 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:38.226 15:30:55 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:38.226 15:30:55 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:38.226 15:30:55 -- target/shutdown.sh@57 -- # local ret=1 00:20:38.226 15:30:55 -- target/shutdown.sh@58 -- # local i 00:20:38.226 15:30:55 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:38.226 15:30:55 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:38.226 15:30:55 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:38.226 15:30:55 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:38.226 15:30:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.226 15:30:55 -- common/autotest_common.sh@10 -- 
# set +x 00:20:38.226 15:30:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.226 15:30:55 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:38.226 15:30:55 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:38.226 15:30:55 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:38.488 15:30:55 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:38.488 15:30:55 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:38.488 15:30:55 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:38.488 15:30:55 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:38.488 15:30:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.488 15:30:55 -- common/autotest_common.sh@10 -- # set +x 00:20:38.488 15:30:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.488 15:30:55 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:38.488 15:30:55 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:38.488 15:30:55 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:38.750 15:30:56 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:38.750 15:30:56 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:38.750 15:30:56 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:38.750 15:30:56 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:38.750 15:30:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.750 15:30:56 -- common/autotest_common.sh@10 -- # set +x 00:20:38.750 15:30:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.750 15:30:56 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:38.750 15:30:56 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:38.750 15:30:56 -- target/shutdown.sh@64 -- # ret=0 00:20:38.750 15:30:56 -- target/shutdown.sh@65 -- # break 00:20:38.750 15:30:56 -- target/shutdown.sh@69 -- # return 0 00:20:38.750 15:30:56 -- target/shutdown.sh@110 -- # killprocess 1684267 00:20:38.750 15:30:56 -- 
common/autotest_common.sh@936 -- # '[' -z 1684267 ']' 00:20:38.750 15:30:56 -- common/autotest_common.sh@940 -- # kill -0 1684267 00:20:38.750 15:30:56 -- common/autotest_common.sh@941 -- # uname 00:20:38.750 15:30:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:38.750 15:30:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1684267 00:20:38.750 15:30:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:38.750 15:30:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:38.750 15:30:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1684267' 00:20:38.750 killing process with pid 1684267 00:20:38.750 15:30:56 -- common/autotest_common.sh@955 -- # kill 1684267 00:20:38.750 15:30:56 -- common/autotest_common.sh@960 -- # wait 1684267 00:20:39.011 Received shutdown signal, test time was about 0.952384 seconds 00:20:39.011 00:20:39.011 Latency(us) 00:20:39.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.011 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:39.011 Verification LBA range: start 0x0 length 0x400 00:20:39.011 Nvme1n1 : 0.92 209.61 13.10 0.00 0.00 301599.29 31894.19 225443.84 00:20:39.011 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:39.011 Verification LBA range: start 0x0 length 0x400 00:20:39.011 Nvme2n1 : 0.95 269.12 16.82 0.00 0.00 229949.65 15837.87 249910.61 00:20:39.011 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:39.011 Verification LBA range: start 0x0 length 0x400 00:20:39.011 Nvme3n1 : 0.94 271.19 16.95 0.00 0.00 223342.93 18786.99 246415.36 00:20:39.011 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:39.011 Verification LBA range: start 0x0 length 0x400 00:20:39.011 Nvme4n1 : 0.94 270.91 16.93 0.00 0.00 218411.31 18022.40 246415.36 00:20:39.011 Job: Nvme5n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:20:39.011 Verification LBA range: start 0x0 length 0x400 00:20:39.011 Nvme5n1 : 0.94 204.78 12.80 0.00 0.00 283020.80 21189.97 302339.41 00:20:39.011 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:39.011 Verification LBA range: start 0x0 length 0x400 00:20:39.011 Nvme6n1 : 0.93 207.19 12.95 0.00 0.00 272621.51 22500.69 291853.65 00:20:39.011 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:39.011 Verification LBA range: start 0x0 length 0x400 00:20:39.011 Nvme7n1 : 0.92 208.77 13.05 0.00 0.00 264075.24 13707.95 232434.35 00:20:39.011 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:39.011 Verification LBA range: start 0x0 length 0x400 00:20:39.011 Nvme8n1 : 0.93 275.17 17.20 0.00 0.00 195832.96 19114.67 209715.20 00:20:39.011 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:39.011 Verification LBA range: start 0x0 length 0x400 00:20:39.011 Nvme9n1 : 0.95 269.74 16.86 0.00 0.00 195709.65 17148.59 249910.61 00:20:39.011 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:39.011 Verification LBA range: start 0x0 length 0x400 00:20:39.011 Nvme10n1 : 0.94 210.89 13.18 0.00 0.00 242297.16 4696.75 267386.88 00:20:39.011 =================================================================================================================== 00:20:39.011 Total : 2397.38 149.84 0.00 0.00 238405.51 4696.75 302339.41 00:20:39.011 15:30:56 -- target/shutdown.sh@113 -- # sleep 1 00:20:39.952 15:30:57 -- target/shutdown.sh@114 -- # kill -0 1683924 00:20:39.952 15:30:57 -- target/shutdown.sh@116 -- # stoptarget 00:20:39.952 15:30:57 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:39.952 15:30:57 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:39.952 15:30:57 -- target/shutdown.sh@43 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:39.952 15:30:57 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:39.952 15:30:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:39.952 15:30:57 -- nvmf/common.sh@117 -- # sync 00:20:39.952 15:30:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:39.952 15:30:57 -- nvmf/common.sh@120 -- # set +e 00:20:39.952 15:30:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:39.952 15:30:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:40.212 rmmod nvme_tcp 00:20:40.212 rmmod nvme_fabrics 00:20:40.212 rmmod nvme_keyring 00:20:40.212 15:30:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:40.212 15:30:57 -- nvmf/common.sh@124 -- # set -e 00:20:40.212 15:30:57 -- nvmf/common.sh@125 -- # return 0 00:20:40.212 15:30:57 -- nvmf/common.sh@478 -- # '[' -n 1683924 ']' 00:20:40.212 15:30:57 -- nvmf/common.sh@479 -- # killprocess 1683924 00:20:40.212 15:30:57 -- common/autotest_common.sh@936 -- # '[' -z 1683924 ']' 00:20:40.212 15:30:57 -- common/autotest_common.sh@940 -- # kill -0 1683924 00:20:40.212 15:30:57 -- common/autotest_common.sh@941 -- # uname 00:20:40.212 15:30:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:40.212 15:30:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1683924 00:20:40.212 15:30:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:40.212 15:30:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:40.212 15:30:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1683924' 00:20:40.212 killing process with pid 1683924 00:20:40.212 15:30:57 -- common/autotest_common.sh@955 -- # kill 1683924 00:20:40.212 15:30:57 -- common/autotest_common.sh@960 -- # wait 1683924 00:20:40.473 15:30:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:40.473 15:30:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:40.473 15:30:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 
00:20:40.473 15:30:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:40.473 15:30:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:40.473 15:30:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.473 15:30:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.473 15:30:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.385 15:30:59 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:42.385 00:20:42.385 real 0m7.888s 00:20:42.385 user 0m23.715s 00:20:42.385 sys 0m1.225s 00:20:42.385 15:30:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:42.385 15:30:59 -- common/autotest_common.sh@10 -- # set +x 00:20:42.385 ************************************ 00:20:42.385 END TEST nvmf_shutdown_tc2 00:20:42.385 ************************************ 00:20:42.647 15:30:59 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:42.647 15:30:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:42.647 15:30:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:42.647 15:30:59 -- common/autotest_common.sh@10 -- # set +x 00:20:42.647 ************************************ 00:20:42.647 START TEST nvmf_shutdown_tc3 00:20:42.647 ************************************ 00:20:42.647 15:31:00 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 00:20:42.647 15:31:00 -- target/shutdown.sh@121 -- # starttarget 00:20:42.648 15:31:00 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:42.648 15:31:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:42.648 15:31:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.648 15:31:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:42.648 15:31:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:42.648 15:31:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:42.648 15:31:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.648 15:31:00 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.648 15:31:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.648 15:31:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:42.648 15:31:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:42.648 15:31:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:42.648 15:31:00 -- common/autotest_common.sh@10 -- # set +x 00:20:42.648 15:31:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:42.648 15:31:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:42.648 15:31:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:42.648 15:31:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:42.648 15:31:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:42.648 15:31:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:42.648 15:31:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:42.648 15:31:00 -- nvmf/common.sh@295 -- # net_devs=() 00:20:42.648 15:31:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:42.648 15:31:00 -- nvmf/common.sh@296 -- # e810=() 00:20:42.648 15:31:00 -- nvmf/common.sh@296 -- # local -ga e810 00:20:42.648 15:31:00 -- nvmf/common.sh@297 -- # x722=() 00:20:42.648 15:31:00 -- nvmf/common.sh@297 -- # local -ga x722 00:20:42.648 15:31:00 -- nvmf/common.sh@298 -- # mlx=() 00:20:42.648 15:31:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:42.648 15:31:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:42.648 15:31:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:42.648 15:31:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:42.648 15:31:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:42.648 15:31:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:42.648 15:31:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:42.648 15:31:00 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:42.648 15:31:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:42.648 15:31:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:42.648 15:31:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:42.648 15:31:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:42.648 15:31:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:42.648 15:31:00 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:42.648 15:31:00 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:42.648 15:31:00 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:42.648 15:31:00 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:42.648 15:31:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:42.648 15:31:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:42.648 15:31:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:42.648 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:42.648 15:31:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:42.648 15:31:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:42.648 15:31:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.648 15:31:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.648 15:31:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:42.648 15:31:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:42.648 15:31:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:42.648 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:42.648 15:31:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:42.648 15:31:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:42.648 15:31:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.648 15:31:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.648 15:31:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:20:42.648 15:31:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:42.648 15:31:00 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:42.648 15:31:00 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:42.648 15:31:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:42.648 15:31:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.648 15:31:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:42.648 15:31:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.648 15:31:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:42.648 Found net devices under 0000:31:00.0: cvl_0_0 00:20:42.648 15:31:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.648 15:31:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:42.648 15:31:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.648 15:31:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:42.648 15:31:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.648 15:31:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:42.648 Found net devices under 0000:31:00.1: cvl_0_1 00:20:42.648 15:31:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.648 15:31:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:42.648 15:31:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:42.648 15:31:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:42.648 15:31:00 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:42.648 15:31:00 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:42.648 15:31:00 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:42.648 15:31:00 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:42.648 15:31:00 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:42.648 15:31:00 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:42.648 15:31:00 -- nvmf/common.sh@236 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:20:42.648 15:31:00 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:42.648 15:31:00 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:42.648 15:31:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:42.648 15:31:00 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:42.648 15:31:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:42.648 15:31:00 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:42.648 15:31:00 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:42.648 15:31:00 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:42.909 15:31:00 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:42.909 15:31:00 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:42.909 15:31:00 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:42.909 15:31:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:42.910 15:31:00 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:42.910 15:31:00 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:42.910 15:31:00 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:42.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:42.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms 00:20:42.910 00:20:42.910 --- 10.0.0.2 ping statistics --- 00:20:42.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.910 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:20:42.910 15:31:00 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:43.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:43.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:20:43.169 00:20:43.169 --- 10.0.0.1 ping statistics --- 00:20:43.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.169 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:20:43.169 15:31:00 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.169 15:31:00 -- nvmf/common.sh@411 -- # return 0 00:20:43.169 15:31:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:43.169 15:31:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.169 15:31:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:43.169 15:31:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:43.169 15:31:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.170 15:31:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:43.170 15:31:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:43.170 15:31:00 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:43.170 15:31:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:43.170 15:31:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:43.170 15:31:00 -- common/autotest_common.sh@10 -- # set +x 00:20:43.170 15:31:00 -- nvmf/common.sh@470 -- # nvmfpid=1685734 00:20:43.170 15:31:00 -- nvmf/common.sh@471 -- # waitforlisten 1685734 00:20:43.170 15:31:00 -- common/autotest_common.sh@817 -- # '[' -z 1685734 ']' 00:20:43.170 15:31:00 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:43.170 15:31:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.170 15:31:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:43.170 15:31:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:43.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.170 15:31:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:43.170 15:31:00 -- common/autotest_common.sh@10 -- # set +x 00:20:43.170 [2024-04-26 15:31:00.492422] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:20:43.170 [2024-04-26 15:31:00.492487] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.170 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.170 [2024-04-26 15:31:00.582173] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:43.430 [2024-04-26 15:31:00.641323] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.430 [2024-04-26 15:31:00.641359] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.430 [2024-04-26 15:31:00.641365] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.430 [2024-04-26 15:31:00.641369] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.430 [2024-04-26 15:31:00.641373] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:43.430 [2024-04-26 15:31:00.641486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.430 [2024-04-26 15:31:00.641643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:43.430 [2024-04-26 15:31:00.641798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.430 [2024-04-26 15:31:00.641800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:44.000 15:31:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:44.000 15:31:01 -- common/autotest_common.sh@850 -- # return 0 00:20:44.000 15:31:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:44.000 15:31:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:44.001 15:31:01 -- common/autotest_common.sh@10 -- # set +x 00:20:44.001 15:31:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.001 15:31:01 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:44.001 15:31:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.001 15:31:01 -- common/autotest_common.sh@10 -- # set +x 00:20:44.001 [2024-04-26 15:31:01.296953] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.001 15:31:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.001 15:31:01 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:44.001 15:31:01 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:44.001 15:31:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:44.001 15:31:01 -- common/autotest_common.sh@10 -- # set +x 00:20:44.001 15:31:01 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:44.001 15:31:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:44.001 15:31:01 -- target/shutdown.sh@28 -- # cat 00:20:44.001 15:31:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 
00:20:44.001 15:31:01 -- target/shutdown.sh@28 -- # cat 00:20:44.001 15:31:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:44.001 15:31:01 -- target/shutdown.sh@28 -- # cat 00:20:44.001 15:31:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:44.001 15:31:01 -- target/shutdown.sh@28 -- # cat 00:20:44.001 15:31:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:44.001 15:31:01 -- target/shutdown.sh@28 -- # cat 00:20:44.001 15:31:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:44.001 15:31:01 -- target/shutdown.sh@28 -- # cat 00:20:44.001 15:31:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:44.001 15:31:01 -- target/shutdown.sh@28 -- # cat 00:20:44.001 15:31:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:44.001 15:31:01 -- target/shutdown.sh@28 -- # cat 00:20:44.001 15:31:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:44.001 15:31:01 -- target/shutdown.sh@28 -- # cat 00:20:44.001 15:31:01 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:44.001 15:31:01 -- target/shutdown.sh@28 -- # cat 00:20:44.001 15:31:01 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:44.001 15:31:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.001 15:31:01 -- common/autotest_common.sh@10 -- # set +x 00:20:44.001 Malloc1 00:20:44.001 [2024-04-26 15:31:01.395687] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.001 Malloc2 00:20:44.261 Malloc3 00:20:44.261 Malloc4 00:20:44.261 Malloc5 00:20:44.261 Malloc6 00:20:44.261 Malloc7 00:20:44.261 Malloc8 00:20:44.261 Malloc9 00:20:44.523 Malloc10 00:20:44.523 15:31:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.523 15:31:01 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:44.523 15:31:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:44.523 15:31:01 -- 
common/autotest_common.sh@10 -- # set +x 00:20:44.523 15:31:01 -- target/shutdown.sh@125 -- # perfpid=1686112 00:20:44.523 15:31:01 -- target/shutdown.sh@126 -- # waitforlisten 1686112 /var/tmp/bdevperf.sock 00:20:44.523 15:31:01 -- common/autotest_common.sh@817 -- # '[' -z 1686112 ']' 00:20:44.523 15:31:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.523 15:31:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:44.523 15:31:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:44.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:44.523 15:31:01 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:44.523 15:31:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:44.523 15:31:01 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:44.523 15:31:01 -- common/autotest_common.sh@10 -- # set +x 00:20:44.523 15:31:01 -- nvmf/common.sh@521 -- # config=() 00:20:44.523 15:31:01 -- nvmf/common.sh@521 -- # local subsystem config 00:20:44.523 15:31:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:44.523 15:31:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:44.523 { 00:20:44.523 "params": { 00:20:44.523 "name": "Nvme$subsystem", 00:20:44.523 "trtype": "$TEST_TRANSPORT", 00:20:44.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.523 "adrfam": "ipv4", 00:20:44.523 "trsvcid": "$NVMF_PORT", 00:20:44.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.523 "hdgst": ${hdgst:-false}, 00:20:44.523 "ddgst": ${ddgst:-false} 00:20:44.523 }, 00:20:44.523 "method": "bdev_nvme_attach_controller" 00:20:44.523 } 00:20:44.523 EOF 00:20:44.523 
)") 00:20:44.523 15:31:01 -- nvmf/common.sh@543 -- # cat 00:20:44.523 15:31:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:44.523 15:31:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:44.523 { 00:20:44.523 "params": { 00:20:44.523 "name": "Nvme$subsystem", 00:20:44.523 "trtype": "$TEST_TRANSPORT", 00:20:44.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.523 "adrfam": "ipv4", 00:20:44.523 "trsvcid": "$NVMF_PORT", 00:20:44.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.523 "hdgst": ${hdgst:-false}, 00:20:44.523 "ddgst": ${ddgst:-false} 00:20:44.523 }, 00:20:44.523 "method": "bdev_nvme_attach_controller" 00:20:44.523 } 00:20:44.523 EOF 00:20:44.523 )") 00:20:44.523 15:31:01 -- nvmf/common.sh@543 -- # cat 00:20:44.523 15:31:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:44.523 15:31:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:44.523 { 00:20:44.523 "params": { 00:20:44.523 "name": "Nvme$subsystem", 00:20:44.523 "trtype": "$TEST_TRANSPORT", 00:20:44.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.523 "adrfam": "ipv4", 00:20:44.523 "trsvcid": "$NVMF_PORT", 00:20:44.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.523 "hdgst": ${hdgst:-false}, 00:20:44.523 "ddgst": ${ddgst:-false} 00:20:44.523 }, 00:20:44.523 "method": "bdev_nvme_attach_controller" 00:20:44.523 } 00:20:44.523 EOF 00:20:44.523 )") 00:20:44.523 15:31:01 -- nvmf/common.sh@543 -- # cat 00:20:44.523 15:31:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:44.523 15:31:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:44.523 { 00:20:44.523 "params": { 00:20:44.523 "name": "Nvme$subsystem", 00:20:44.523 "trtype": "$TEST_TRANSPORT", 00:20:44.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.523 "adrfam": "ipv4", 00:20:44.523 "trsvcid": "$NVMF_PORT", 00:20:44.523 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.523 "hdgst": ${hdgst:-false}, 00:20:44.523 "ddgst": ${ddgst:-false} 00:20:44.523 }, 00:20:44.523 "method": "bdev_nvme_attach_controller" 00:20:44.523 } 00:20:44.523 EOF 00:20:44.523 )") 00:20:44.523 15:31:01 -- nvmf/common.sh@543 -- # cat 00:20:44.523 15:31:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:44.523 15:31:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:44.523 { 00:20:44.523 "params": { 00:20:44.523 "name": "Nvme$subsystem", 00:20:44.523 "trtype": "$TEST_TRANSPORT", 00:20:44.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.523 "adrfam": "ipv4", 00:20:44.523 "trsvcid": "$NVMF_PORT", 00:20:44.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.523 "hdgst": ${hdgst:-false}, 00:20:44.523 "ddgst": ${ddgst:-false} 00:20:44.523 }, 00:20:44.523 "method": "bdev_nvme_attach_controller" 00:20:44.523 } 00:20:44.523 EOF 00:20:44.523 )") 00:20:44.523 15:31:01 -- nvmf/common.sh@543 -- # cat 00:20:44.523 15:31:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:44.523 15:31:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:44.523 { 00:20:44.523 "params": { 00:20:44.523 "name": "Nvme$subsystem", 00:20:44.523 "trtype": "$TEST_TRANSPORT", 00:20:44.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.523 "adrfam": "ipv4", 00:20:44.523 "trsvcid": "$NVMF_PORT", 00:20:44.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.523 "hdgst": ${hdgst:-false}, 00:20:44.523 "ddgst": ${ddgst:-false} 00:20:44.523 }, 00:20:44.523 "method": "bdev_nvme_attach_controller" 00:20:44.523 } 00:20:44.523 EOF 00:20:44.523 )") 00:20:44.523 15:31:01 -- nvmf/common.sh@543 -- # cat 00:20:44.523 15:31:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:44.523 15:31:01 -- nvmf/common.sh@543 -- # 
config+=("$(cat <<-EOF 00:20:44.524 { 00:20:44.524 "params": { 00:20:44.524 "name": "Nvme$subsystem", 00:20:44.524 "trtype": "$TEST_TRANSPORT", 00:20:44.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.524 "adrfam": "ipv4", 00:20:44.524 "trsvcid": "$NVMF_PORT", 00:20:44.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.524 "hdgst": ${hdgst:-false}, 00:20:44.524 "ddgst": ${ddgst:-false} 00:20:44.524 }, 00:20:44.524 "method": "bdev_nvme_attach_controller" 00:20:44.524 } 00:20:44.524 EOF 00:20:44.524 )") 00:20:44.524 15:31:01 -- nvmf/common.sh@543 -- # cat 00:20:44.524 [2024-04-26 15:31:01.845778] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:20:44.524 [2024-04-26 15:31:01.845830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1686112 ] 00:20:44.524 15:31:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:44.524 15:31:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:44.524 { 00:20:44.524 "params": { 00:20:44.524 "name": "Nvme$subsystem", 00:20:44.524 "trtype": "$TEST_TRANSPORT", 00:20:44.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.524 "adrfam": "ipv4", 00:20:44.524 "trsvcid": "$NVMF_PORT", 00:20:44.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.524 "hdgst": ${hdgst:-false}, 00:20:44.524 "ddgst": ${ddgst:-false} 00:20:44.524 }, 00:20:44.524 "method": "bdev_nvme_attach_controller" 00:20:44.524 } 00:20:44.524 EOF 00:20:44.524 )") 00:20:44.524 15:31:01 -- nvmf/common.sh@543 -- # cat 00:20:44.524 15:31:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:44.524 15:31:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:44.524 { 00:20:44.524 "params": { 00:20:44.524 "name": 
"Nvme$subsystem", 00:20:44.524 "trtype": "$TEST_TRANSPORT", 00:20:44.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.524 "adrfam": "ipv4", 00:20:44.524 "trsvcid": "$NVMF_PORT", 00:20:44.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.524 "hdgst": ${hdgst:-false}, 00:20:44.524 "ddgst": ${ddgst:-false} 00:20:44.524 }, 00:20:44.524 "method": "bdev_nvme_attach_controller" 00:20:44.524 } 00:20:44.524 EOF 00:20:44.524 )") 00:20:44.524 15:31:01 -- nvmf/common.sh@543 -- # cat 00:20:44.524 15:31:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:44.524 15:31:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:44.524 { 00:20:44.524 "params": { 00:20:44.524 "name": "Nvme$subsystem", 00:20:44.524 "trtype": "$TEST_TRANSPORT", 00:20:44.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.524 "adrfam": "ipv4", 00:20:44.524 "trsvcid": "$NVMF_PORT", 00:20:44.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.524 "hdgst": ${hdgst:-false}, 00:20:44.524 "ddgst": ${ddgst:-false} 00:20:44.524 }, 00:20:44.524 "method": "bdev_nvme_attach_controller" 00:20:44.524 } 00:20:44.524 EOF 00:20:44.524 )") 00:20:44.524 15:31:01 -- nvmf/common.sh@543 -- # cat 00:20:44.524 15:31:01 -- nvmf/common.sh@545 -- # jq . 
00:20:44.524 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.524 15:31:01 -- nvmf/common.sh@546 -- # IFS=, 00:20:44.524 15:31:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:44.524 "params": { 00:20:44.524 "name": "Nvme1", 00:20:44.524 "trtype": "tcp", 00:20:44.524 "traddr": "10.0.0.2", 00:20:44.524 "adrfam": "ipv4", 00:20:44.524 "trsvcid": "4420", 00:20:44.524 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.524 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:44.524 "hdgst": false, 00:20:44.524 "ddgst": false 00:20:44.524 }, 00:20:44.524 "method": "bdev_nvme_attach_controller" 00:20:44.524 },{ 00:20:44.524 "params": { 00:20:44.524 "name": "Nvme2", 00:20:44.524 "trtype": "tcp", 00:20:44.524 "traddr": "10.0.0.2", 00:20:44.524 "adrfam": "ipv4", 00:20:44.524 "trsvcid": "4420", 00:20:44.524 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:44.524 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:44.524 "hdgst": false, 00:20:44.524 "ddgst": false 00:20:44.524 }, 00:20:44.524 "method": "bdev_nvme_attach_controller" 00:20:44.524 },{ 00:20:44.524 "params": { 00:20:44.524 "name": "Nvme3", 00:20:44.524 "trtype": "tcp", 00:20:44.524 "traddr": "10.0.0.2", 00:20:44.524 "adrfam": "ipv4", 00:20:44.524 "trsvcid": "4420", 00:20:44.524 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:44.524 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:44.524 "hdgst": false, 00:20:44.524 "ddgst": false 00:20:44.524 }, 00:20:44.524 "method": "bdev_nvme_attach_controller" 00:20:44.524 },{ 00:20:44.524 "params": { 00:20:44.524 "name": "Nvme4", 00:20:44.524 "trtype": "tcp", 00:20:44.524 "traddr": "10.0.0.2", 00:20:44.524 "adrfam": "ipv4", 00:20:44.524 "trsvcid": "4420", 00:20:44.524 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:44.524 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:44.524 "hdgst": false, 00:20:44.524 "ddgst": false 00:20:44.524 }, 00:20:44.524 "method": "bdev_nvme_attach_controller" 00:20:44.524 },{ 00:20:44.524 "params": { 00:20:44.524 "name": "Nvme5", 00:20:44.524 "trtype": "tcp", 
00:20:44.524 "traddr": "10.0.0.2", 00:20:44.524 "adrfam": "ipv4", 00:20:44.524 "trsvcid": "4420", 00:20:44.524 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:44.524 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:44.524 "hdgst": false, 00:20:44.524 "ddgst": false 00:20:44.524 }, 00:20:44.524 "method": "bdev_nvme_attach_controller" 00:20:44.524 },{ 00:20:44.524 "params": { 00:20:44.524 "name": "Nvme6", 00:20:44.524 "trtype": "tcp", 00:20:44.524 "traddr": "10.0.0.2", 00:20:44.524 "adrfam": "ipv4", 00:20:44.524 "trsvcid": "4420", 00:20:44.524 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:44.524 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:44.524 "hdgst": false, 00:20:44.524 "ddgst": false 00:20:44.524 }, 00:20:44.524 "method": "bdev_nvme_attach_controller" 00:20:44.524 },{ 00:20:44.524 "params": { 00:20:44.524 "name": "Nvme7", 00:20:44.524 "trtype": "tcp", 00:20:44.524 "traddr": "10.0.0.2", 00:20:44.524 "adrfam": "ipv4", 00:20:44.524 "trsvcid": "4420", 00:20:44.524 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:44.524 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:44.524 "hdgst": false, 00:20:44.524 "ddgst": false 00:20:44.524 }, 00:20:44.524 "method": "bdev_nvme_attach_controller" 00:20:44.524 },{ 00:20:44.524 "params": { 00:20:44.524 "name": "Nvme8", 00:20:44.524 "trtype": "tcp", 00:20:44.524 "traddr": "10.0.0.2", 00:20:44.524 "adrfam": "ipv4", 00:20:44.524 "trsvcid": "4420", 00:20:44.524 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:44.524 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:44.524 "hdgst": false, 00:20:44.524 "ddgst": false 00:20:44.524 }, 00:20:44.524 "method": "bdev_nvme_attach_controller" 00:20:44.524 },{ 00:20:44.524 "params": { 00:20:44.524 "name": "Nvme9", 00:20:44.524 "trtype": "tcp", 00:20:44.524 "traddr": "10.0.0.2", 00:20:44.524 "adrfam": "ipv4", 00:20:44.524 "trsvcid": "4420", 00:20:44.524 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:44.524 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:44.524 "hdgst": false, 00:20:44.524 "ddgst": false 
00:20:44.524 }, 00:20:44.524 "method": "bdev_nvme_attach_controller" 00:20:44.524 },{ 00:20:44.524 "params": { 00:20:44.524 "name": "Nvme10", 00:20:44.524 "trtype": "tcp", 00:20:44.524 "traddr": "10.0.0.2", 00:20:44.524 "adrfam": "ipv4", 00:20:44.524 "trsvcid": "4420", 00:20:44.524 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:44.524 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:44.524 "hdgst": false, 00:20:44.524 "ddgst": false 00:20:44.524 }, 00:20:44.524 "method": "bdev_nvme_attach_controller" 00:20:44.524 }' 00:20:44.524 [2024-04-26 15:31:01.907356] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.524 [2024-04-26 15:31:01.970318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.909 Running I/O for 10 seconds... 00:20:45.909 15:31:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:45.909 15:31:03 -- common/autotest_common.sh@850 -- # return 0 00:20:45.909 15:31:03 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:45.909 15:31:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.910 15:31:03 -- common/autotest_common.sh@10 -- # set +x 00:20:46.171 15:31:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.171 15:31:03 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:46.171 15:31:03 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:46.171 15:31:03 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:46.171 15:31:03 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:46.171 15:31:03 -- target/shutdown.sh@57 -- # local ret=1 00:20:46.171 15:31:03 -- target/shutdown.sh@58 -- # local i 00:20:46.171 15:31:03 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:46.171 15:31:03 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:46.171 15:31:03 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:20:46.171 15:31:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.171 15:31:03 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:46.171 15:31:03 -- common/autotest_common.sh@10 -- # set +x 00:20:46.171 15:31:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.171 15:31:03 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:46.171 15:31:03 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:46.171 15:31:03 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:46.432 15:31:03 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:46.432 15:31:03 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:46.432 15:31:03 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:46.432 15:31:03 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:46.432 15:31:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.432 15:31:03 -- common/autotest_common.sh@10 -- # set +x 00:20:46.432 15:31:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.432 15:31:03 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:46.432 15:31:03 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:46.432 15:31:03 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:46.693 15:31:04 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:46.693 15:31:04 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:46.693 15:31:04 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:46.693 15:31:04 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:46.693 15:31:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.693 15:31:04 -- common/autotest_common.sh@10 -- # set +x 00:20:46.693 15:31:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.693 15:31:04 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:46.693 15:31:04 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:46.693 15:31:04 -- target/shutdown.sh@64 -- # 
ret=0 00:20:46.693 15:31:04 -- target/shutdown.sh@65 -- # break 00:20:46.693 15:31:04 -- target/shutdown.sh@69 -- # return 0 00:20:46.693 15:31:04 -- target/shutdown.sh@135 -- # killprocess 1685734 00:20:46.693 15:31:04 -- common/autotest_common.sh@936 -- # '[' -z 1685734 ']' 00:20:46.693 15:31:04 -- common/autotest_common.sh@940 -- # kill -0 1685734 00:20:46.978 15:31:04 -- common/autotest_common.sh@941 -- # uname 00:20:46.978 15:31:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:46.978 15:31:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1685734 00:20:46.978 15:31:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:46.978 15:31:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:46.978 15:31:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1685734' 00:20:46.978 killing process with pid 1685734 00:20:46.978 15:31:04 -- common/autotest_common.sh@955 -- # kill 1685734 00:20:46.978 15:31:04 -- common/autotest_common.sh@960 -- # wait 1685734 00:20:46.978 [2024-04-26 15:31:04.197525] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe96470 is same with the state(5) to be set 00:20:46.978 [2024-04-26 15:31:04.197570] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe96470 is same with the state(5) to be set 00:20:46.978 [2024-04-26 15:31:04.198537] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079790 is same with the state(5) to be set 00:20:46.978 [2024-04-26 15:31:04.198559] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079790 is same with the state(5) to be set 00:20:46.978 [2024-04-26 15:31:04.198565] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079790 is same with the state(5) to be set 00:20:46.978 [2024-04-26 15:31:04.198570] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079790 
is same with the state(5) to be set 00:20:46.978 [2024-04-26 15:31:04.200146] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe96900 is same with the state(5) to be set 00:20:46.980 [2024-04-26 15:31:04.201536] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe96d90 is same with the state(5) to be set
is same with the state(5) to be set 00:20:46.980 [2024-04-26 15:31:04.201834] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe96d90 is same with the state(5) to be set 00:20:46.980 [2024-04-26 15:31:04.201843] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe96d90 is same with the state(5) to be set 00:20:46.980 [2024-04-26 15:31:04.202954] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.980 [2024-04-26 15:31:04.202974] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.980 [2024-04-26 15:31:04.202980] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.980 [2024-04-26 15:31:04.202985] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.980 [2024-04-26 15:31:04.202989] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.980 [2024-04-26 15:31:04.202994] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.980 [2024-04-26 15:31:04.202999] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.980 [2024-04-26 15:31:04.203004] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.980 [2024-04-26 15:31:04.203008] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.980 [2024-04-26 15:31:04.203013] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 
00:20:46.980 [2024-04-26 15:31:04.203017] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.980 [2024-04-26 15:31:04.203022] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.980 [2024-04-26 15:31:04.203026] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.980 [2024-04-26 15:31:04.203031] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.980 [2024-04-26 15:31:04.203035] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.980 [2024-04-26 15:31:04.203040] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.980 [2024-04-26 15:31:04.203045] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.980 [2024-04-26 15:31:04.203050] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.980 [2024-04-26 15:31:04.203054] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.980 [2024-04-26 15:31:04.203059] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.980 [2024-04-26 15:31:04.203067] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.980 [2024-04-26 15:31:04.203072] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203076] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203081] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203085] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203089] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203094] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203099] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203104] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203108] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203113] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203117] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203121] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203126] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203130] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203135] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203140] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203144] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203149] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203153] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203157] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203162] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203166] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203170] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203175] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203179] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203184] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 
is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203189] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203194] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203199] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203203] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203208] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203212] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203216] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203220] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203225] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203229] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203234] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203238] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 
00:20:46.981 [2024-04-26 15:31:04.203242] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203247] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203253] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203257] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97220 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203848] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97580 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203864] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97580 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203869] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97580 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203874] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97580 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203879] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97580 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203883] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97580 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.203888] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97580 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204372] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204382] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204387] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204391] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204396] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204403] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204408] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204412] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204417] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204421] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204425] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204430] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204434] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204439] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204443] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204448] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204453] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204457] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204462] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204466] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204470] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204475] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204479] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204484] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204488] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204493] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 
is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204497] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204502] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204506] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204510] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204515] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204519] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204525] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204530] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204534] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204539] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204543] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204548] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 
00:20:46.981 [2024-04-26 15:31:04.204552] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.981 [2024-04-26 15:31:04.204556] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.204561] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.204565] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.204570] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.204574] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.204578] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.204583] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.204587] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.204591] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.204596] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.204600] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.204605] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.204609] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.204614] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.204618] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.204622] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.204627] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.204631] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.204636] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.204640] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.204648] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.204653] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.204657] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.204662] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.204666] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97a30 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.205607] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97ec0 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.205621] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97ec0 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.205626] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97ec0 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.205631] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97ec0 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.205635] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97ec0 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.205640] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97ec0 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.205645] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97ec0 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.205649] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97ec0 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.205654] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97ec0 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.205658] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97ec0 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.205663] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97ec0 
is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.205667] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97ec0 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.205671] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97ec0 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.205676] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97ec0 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.205680] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97ec0 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.205684] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97ec0 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.205689] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97ec0 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.205693] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97ec0 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.205698] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97ec0 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.205702] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97ec0 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.205707] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97ec0 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.205712] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97ec0 is same with the state(5) to be set 00:20:46.982 [2024-04-26 15:31:04.205716] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97ec0 is same with the state(5) to be set 
00:20:46.982 [2024-04-26 15:31:04.205723] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe97ec0 is same with the state(5) to be set
[... same *ERROR* repeated 39 more times for tqpair=0xe97ec0, timestamps 15:31:04.205728 through 15:31:04.205906 ...]
00:20:46.983 [2024-04-26 15:31:04.206486] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe98350 is same with the state(5) to be set
[... same *ERROR* repeated 62 more times for tqpair=0xe98350, timestamps 15:31:04.206499 through 15:31:04.206776 ...]
00:20:46.983 [2024-04-26 15:31:04.207215] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set
[... same *ERROR* repeated 15 more times for tqpair=0x1079300, timestamps 15:31:04.207227 through 15:31:04.207304 ...]
00:20:46.984 [2024-04-26 15:31:04.212047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:46.984 [2024-04-26 15:31:04.212081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1-3, timestamps 15:31:04.212091 through 15:31:04.212129 ...]
00:20:46.984 [2024-04-26 15:31:04.212136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1511000 is same with the state(5) to be set
[... same group of four ASYNC EVENT REQUEST (cid:0-3) / ABORTED - SQ DELETION pairs followed by one recv-state *ERROR* repeated for tqpair=0x1384ff0, 0x148deb0, 0x1514700, 0x1384910, 0x151b240, 0x1362eb0, 0x1365c20, and 0xec3ce0, timestamps 15:31:04.212173 through 15:31:04.212817 ...]
00:20:46.984 [2024-04-26 15:31:04.213802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.985 [2024-04-26 15:31:04.213823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same WRITE / ABORTED - SQ DELETION pair repeated for cid:40-61 (lba:29696-32384, len:128), timestamps 15:31:04.213846 through 15:31:04.214195 ...]
00:20:46.985 [2024-04-26 15:31:04.214204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.985 [2024-04-26 15:31:04.214211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:20:46.985 [2024-04-26 15:31:04.214220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.985 [2024-04-26 15:31:04.214227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.985 [2024-04-26 15:31:04.214236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.985 [2024-04-26 15:31:04.214243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.985 [2024-04-26 15:31:04.214252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.985 [2024-04-26 15:31:04.214259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.985 [2024-04-26 15:31:04.214268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.985 [2024-04-26 15:31:04.214275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.985 [2024-04-26 15:31:04.214284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.985 [2024-04-26 15:31:04.214290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.985 [2024-04-26 15:31:04.214300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.985 [2024-04-26 15:31:04.214307] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.985 [2024-04-26 15:31:04.214315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.985 [2024-04-26 15:31:04.214323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.985 [2024-04-26 15:31:04.214332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.985 [2024-04-26 15:31:04.214339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.985 [2024-04-26 15:31:04.214348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.985 [2024-04-26 15:31:04.214355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.985 [2024-04-26 15:31:04.214364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.985 [2024-04-26 15:31:04.214371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.985 [2024-04-26 15:31:04.214380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.985 [2024-04-26 15:31:04.214387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.985 [2024-04-26 15:31:04.214396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.985 [2024-04-26 15:31:04.214402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.985 [2024-04-26 15:31:04.214411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.985 [2024-04-26 15:31:04.214418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.985 [2024-04-26 15:31:04.214427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.985 [2024-04-26 15:31:04.214434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.985 [2024-04-26 15:31:04.214443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.985 [2024-04-26 15:31:04.214449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.985 [2024-04-26 15:31:04.214458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.985 [2024-04-26 15:31:04.214465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.985 [2024-04-26 15:31:04.214474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.985 [2024-04-26 15:31:04.214481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:20:46.985 [2024-04-26 15:31:04.214490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.985 [2024-04-26 15:31:04.214496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.985 [2024-04-26 15:31:04.214505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.985 [2024-04-26 15:31:04.214512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.214521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.214529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.214538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.214545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.214554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.214560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.214569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.214576] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.214585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.214592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.214601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.214607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.214616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.214623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.214632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.214639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.214648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.214654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.214663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.214670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.214679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.214686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.214695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.214701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.214710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.214717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.214728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.214735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.214745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.214752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.214761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.214767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.214777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.214784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.214793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.214799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.214809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.214816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.214825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.214832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.214844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 
15:31:04.214851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.214905] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1473750 was disconnected and freed. reset controller. 00:20:46.986 [2024-04-26 15:31:04.215040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.215051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.215063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.215071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.215080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.215087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.215096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.215103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.215115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.215123] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.215132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.215139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.215148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.215154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.215163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.215170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.215179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.215186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.215195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.215201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.215210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.215217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.215226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.215233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.215242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.215248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.215257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.215264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.215273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.215280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.215289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.215296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 
[2024-04-26 15:31:04.215305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.215313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.215322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.215316] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.986 [2024-04-26 15:31:04.215329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.986 [2024-04-26 15:31:04.215335] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.986 [2024-04-26 15:31:04.215338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.986 [2024-04-26 15:31:04.215342] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.986 [2024-04-26 15:31:04.215346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.987 [2024-04-26 15:31:04.215348] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987 [2024-04-26 15:31:04.215355] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987 [2024-04-26 15:31:04.215356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 
lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.987
[2024-04-26 15:31:04.215360] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.987
[2024-04-26 15:31:04.215366] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215373] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.987
[2024-04-26 15:31:04.215380] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.987
[2024-04-26 15:31:04.215385] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215391] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.987
[2024-04-26 15:31:04.215395] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.987
[2024-04-26 15:31:04.215400] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215408] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.987
[2024-04-26 15:31:04.215412] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215421] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.987
[2024-04-26 15:31:04.215426] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215431] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.987
[2024-04-26 15:31:04.215437] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215442] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.987
[2024-04-26 15:31:04.215447] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215452] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.987
[2024-04-26 15:31:04.215457] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.987
[2024-04-26 15:31:04.215462] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215468] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.987
[2024-04-26 15:31:04.215473] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215478] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.987
[2024-04-26 15:31:04.215483] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215488] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.987
[2024-04-26 15:31:04.215493] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.987
[2024-04-26 15:31:04.215498] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.987
[2024-04-26 15:31:04.215507] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215515] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.987
[2024-04-26 15:31:04.215520] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215525] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.987
[2024-04-26 15:31:04.215530] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.987
[2024-04-26 15:31:04.215535] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215540] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.987
[2024-04-26 15:31:04.215545] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215550] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.987
[2024-04-26 15:31:04.215557] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.987
[2024-04-26 15:31:04.215562] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215568] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.987
[2024-04-26 15:31:04.215574] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215579] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.987
[2024-04-26 15:31:04.215584] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.987
[2024-04-26 15:31:04.215589] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079300 is same with the state(5) to be set 00:20:46.987
[2024-04-26 15:31:04.215597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.987
[2024-04-26 15:31:04.215605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.987
[2024-04-26 15:31:04.215614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.987
[2024-04-26 15:31:04.215621] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.215630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.223878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.223924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.223935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.223946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.223955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.223966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.223974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.223983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.223990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.223999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.224006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.224022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.224038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.224054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.224070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.224090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.224106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.224122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.224138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.224154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.224171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 
[2024-04-26 15:31:04.224186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.224202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.224217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.224233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.224248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.224265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224274] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.224281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.224298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.224313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.224329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.224345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.224360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.224376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.988 [2024-04-26 15:31:04.224392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224456] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x159c370 was disconnected and freed. reset controller. 00:20:46.988 [2024-04-26 15:31:04.224737] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1511000 (9): Bad file descriptor 00:20:46.988 [2024-04-26 15:31:04.224783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.988 [2024-04-26 15:31:04.224793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.988 [2024-04-26 15:31:04.224809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.988 
[2024-04-26 15:31:04.224824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.988 [2024-04-26 15:31:04.224852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.988 [2024-04-26 15:31:04.224860] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360fb0 is same with the state(5) to be set 00:20:46.988 [2024-04-26 15:31:04.224876] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1384ff0 (9): Bad file descriptor 00:20:46.988 [2024-04-26 15:31:04.224898] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148deb0 (9): Bad file descriptor 00:20:46.988 [2024-04-26 15:31:04.224915] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1514700 (9): Bad file descriptor 00:20:46.988 [2024-04-26 15:31:04.224927] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1384910 (9): Bad file descriptor 00:20:46.988 [2024-04-26 15:31:04.224943] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151b240 (9): Bad file descriptor 00:20:46.988 [2024-04-26 15:31:04.224960] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1362eb0 (9): Bad file descriptor 00:20:46.988 [2024-04-26 15:31:04.224972] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1365c20 (9): Bad file descriptor 00:20:46.988 [2024-04-26 15:31:04.224986] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec3ce0 (9): Bad file descriptor 00:20:46.988 [2024-04-26 
15:31:04.227733] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:46.988 [2024-04-26 15:31:04.227761] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:46.988 [2024-04-26 15:31:04.227965] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:46.988 [2024-04-26 15:31:04.228010] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:46.988 [2024-04-26 15:31:04.228049] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:46.988 [2024-04-26 15:31:04.228125] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:46.989 [2024-04-26 15:31:04.228268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.989 [2024-04-26 15:31:04.228640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.989 [2024-04-26 15:31:04.228651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1362eb0 with addr=10.0.0.2, port=4420 00:20:46.989 [2024-04-26 15:31:04.228659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362eb0 is same with the state(5) to be set 00:20:46.989 [2024-04-26 15:31:04.229081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.989 [2024-04-26 15:31:04.229455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.989 [2024-04-26 15:31:04.229469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1514700 with addr=10.0.0.2, port=4420 00:20:46.989 [2024-04-26 15:31:04.229479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1514700 is same with the state(5) to be set 00:20:46.989 [2024-04-26 15:31:04.229807] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:46.989 [2024-04-26 
15:31:04.229860] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:46.989 [2024-04-26 15:31:04.229900] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:46.989 [2024-04-26 15:31:04.230253] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1362eb0 (9): Bad file descriptor 00:20:46.989 [2024-04-26 15:31:04.230269] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1514700 (9): Bad file descriptor 00:20:46.989 [2024-04-26 15:31:04.230367] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:46.989 [2024-04-26 15:31:04.230384] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:46.989 [2024-04-26 15:31:04.230392] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:46.989 [2024-04-26 15:31:04.230400] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:46.989 [2024-04-26 15:31:04.230416] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:46.989 [2024-04-26 15:31:04.230427] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:46.989 [2024-04-26 15:31:04.230433] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:46.989 [2024-04-26 15:31:04.230499] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:46.989 [2024-04-26 15:31:04.230507] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:46.989 [2024-04-26 15:31:04.234736] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1360fb0 (9): Bad file descriptor 00:20:46.989 [2024-04-26 15:31:04.234894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.989 [2024-04-26 15:31:04.234908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.989 [2024-04-26 15:31:04.234924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.989 [2024-04-26 15:31:04.234932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.989 [2024-04-26 15:31:04.234941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.989 [2024-04-26 15:31:04.234948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.989 [2024-04-26 15:31:04.234958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.989 [2024-04-26 15:31:04.234965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.989 [2024-04-26 15:31:04.234975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.989 [2024-04-26 15:31:04.234982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.989 [2024-04-26 15:31:04.234990] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.989 [2024-04-26 15:31:04.234998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.989 [2024-04-26 15:31:04.235007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.989 [2024-04-26 15:31:04.235014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.989 [2024-04-26 15:31:04.235023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.989 [2024-04-26 15:31:04.235030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.989 [2024-04-26 15:31:04.235039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.989 [2024-04-26 15:31:04.235046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.989 [2024-04-26 15:31:04.235055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.989 [2024-04-26 15:31:04.235062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.989 [2024-04-26 15:31:04.235071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.989 [2024-04-26 15:31:04.235078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.989 [2024-04-26 15:31:04.235094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.989 [2024-04-26 15:31:04.235101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.989 [2024-04-26 15:31:04.235110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.989 [2024-04-26 15:31:04.235118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.989 [2024-04-26 15:31:04.235126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.989 [2024-04-26 15:31:04.235134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.989 [2024-04-26 15:31:04.235143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.989 [2024-04-26 15:31:04.235150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.989 [2024-04-26 15:31:04.235159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.989 [2024-04-26 15:31:04.235166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.989 [2024-04-26 15:31:04.235175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:46.989 [2024-04-26 15:31:04.235182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.989 [2024-04-26 15:31:04.235191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.989 [2024-04-26 15:31:04.235198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.989 [2024-04-26 15:31:04.235207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.989 [2024-04-26 15:31:04.235214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.989 [2024-04-26 15:31:04.235223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.989 [2024-04-26 15:31:04.235230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.989 [2024-04-26 15:31:04.235240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.989 [2024-04-26 15:31:04.235246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.989 [2024-04-26 15:31:04.235255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.989 [2024-04-26 15:31:04.235263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.989 [2024-04-26 15:31:04.235272] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.989 [2024-04-26 15:31:04.235279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 41 further READ / ABORTED - SQ DELETION (00/08) pairs elided: cid:23-63, lba:19328-24448 in steps of 128, timestamps 15:31:04.235288-15:31:04.235946 ...]
00:20:46.990 [2024-04-26 15:31:04.235954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14e1070 is same with the state(5) to be set
00:20:46.990 [2024-04-26 15:31:04.237240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.990 [2024-04-26 15:31:04.237254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further READ / ABORTED - SQ DELETION (00/08) pairs elided: cid:1-63, lba:24704-32640 in steps of 128, timestamps 15:31:04.237266-15:31:04.238297 ...]
00:20:46.992 [2024-04-26 15:31:04.238305] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14e24d0 is same with the state(5) to be set
00:20:46.992 [2024-04-26 15:31:04.239578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.992 [2024-04-26 15:31:04.239591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... further identical pairs elided: cid:1-11, lba:24704-25984 in steps of 128, timestamps 15:31:04.239604-15:31:04.239778 ...]
00:20:46.992 [2024-04-26 15:31:04.239785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.992 [2024-04-26 15:31:04.239795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.992 [2024-04-26 15:31:04.239802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.992 [2024-04-26 15:31:04.239811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.992 [2024-04-26 15:31:04.239818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.992 [2024-04-26 15:31:04.239827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.992 [2024-04-26 15:31:04.239834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.992 [2024-04-26 15:31:04.239847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.992 [2024-04-26 15:31:04.239854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.992 [2024-04-26 15:31:04.239864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.992 [2024-04-26 15:31:04.239871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.992 [2024-04-26 15:31:04.239881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:46.992 [2024-04-26 15:31:04.239887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.992 [2024-04-26 15:31:04.239897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.992 [2024-04-26 15:31:04.239904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.992 [2024-04-26 15:31:04.239913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.992 [2024-04-26 15:31:04.239922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.992 [2024-04-26 15:31:04.239931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.992 [2024-04-26 15:31:04.239938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.992 [2024-04-26 15:31:04.239947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.992 [2024-04-26 15:31:04.239955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.992 [2024-04-26 15:31:04.239964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.992 [2024-04-26 15:31:04.239972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.992 [2024-04-26 15:31:04.239980] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.992 [2024-04-26 15:31:04.239988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.992 [2024-04-26 15:31:04.239997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.992 [2024-04-26 15:31:04.240004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240069] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 
15:31:04.240258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:46.993 [2024-04-26 15:31:04.240541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240629] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.240645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.240653] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159aec0 is same with the state(5) to be set 00:20:46.993 [2024-04-26 15:31:04.241917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.241929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.241940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-26 15:31:04.241947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-26 15:31:04.241956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.994 [2024-04-26 15:31:04.241964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.994 [2024-04-26 15:31:04.241973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.994 [2024-04-26 15:31:04.241980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.994 [2024-04-26 15:31:04.241989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.994 [2024-04-26 15:31:04.241996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.994 [2024-04-26 15:31:04.242005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.994 [2024-04-26 15:31:04.242017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.994 [2024-04-26 15:31:04.242026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.994 [2024-04-26 15:31:04.242033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.994 [2024-04-26 15:31:04.242042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.994 [2024-04-26 15:31:04.242049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.994 [2024-04-26 15:31:04.242059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.994 [2024-04-26 15:31:04.242066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.994 [2024-04-26 15:31:04.242075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:46.994 [2024-04-26 15:31:04.242082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.994 [2024-04-26 15:31:04.242091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.994 [2024-04-26 15:31:04.242098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.994 [2024-04-26 15:31:04.242107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.994 [2024-04-26 15:31:04.242115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.994 [2024-04-26 15:31:04.242123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.994 [2024-04-26 15:31:04.242131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.994 [2024-04-26 15:31:04.242140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.994 [2024-04-26 15:31:04.242147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.994 [2024-04-26 15:31:04.242156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.994 [2024-04-26 15:31:04.242163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.994 [2024-04-26 15:31:04.242172] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.994 [2024-04-26 15:31:04.242179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.994 [2024-04-26 15:31:04.242188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.994 [2024-04-26 15:31:04.242195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.994 [2024-04-26 15:31:04.242204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.994 [2024-04-26 15:31:04.242211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.994 [2024-04-26 15:31:04.242222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.994 [2024-04-26 15:31:04.242229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.994 [2024-04-26 15:31:04.242238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.994 [2024-04-26 15:31:04.242245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.994 [2024-04-26 15:31:04.242254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.994 [2024-04-26 15:31:04.242261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.994 [2024-04-26 15:31:04.242270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.994 [2024-04-26 15:31:04.242277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION completion pairs repeated for cid:22 through cid:63 (lba 27392-32640, step 128), timestamps 15:31:04.242286-15:31:04.242965 ...]
00:20:46.995 [2024-04-26 15:31:04.242973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475220 is same with the state(5) to be set
00:20:46.995 [2024-04-26 15:31:04.244238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.995 [2024-04-26 15:31:04.244252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION completion pairs repeated for cid:1 through cid:63 (lba 24704-32640, step 128), timestamps 15:31:04.244263-15:31:04.245286 ...]
00:20:46.997 [2024-04-26 15:31:04.245295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135bb00 is same with the state(5) to be set
00:20:46.997 [2024-04-26 15:31:04.246551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.997 [2024-04-26 15:31:04.246564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION completion pairs repeated for cid:1 through cid:9 (lba 24704-25728, step 128), timestamps 15:31:04.246576-15:31:04.246721 ...]
00:20:46.997 [2024-04-26 15:31:04.246730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:46.997 [2024-04-26 15:31:04.246737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.246746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.246754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.246763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.246771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.246779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.246787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.246797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.246805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.246813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.246821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.246830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.246841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.246850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.246858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.246867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.246874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.246884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.246891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.246900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.246907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.246916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.246923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.246932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.246939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.246948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.246955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.246964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.246971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.246980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.246987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.246995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.247004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.247013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.247020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.247029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.247036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.247045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.247052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.247061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.247068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.247078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.247085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.247094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.247101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.247110] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.247117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.247126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.247134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-26 15:31:04.247142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-26 15:31:04.247149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247197] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 
15:31:04.247384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247484] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.247612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.247621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135ce60 is same with the state(5) to be set 00:20:46.998 [2024-04-26 15:31:04.248899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.248911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.248922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.248929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.248939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:46.998 [2024-04-26 15:31:04.248946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.248956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.248963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.248972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.248979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.248988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.248995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.249004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.249011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.249021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.249027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.249036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.249044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.249052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.249060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.249069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.249076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.249085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.249092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.249101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.249111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-26 15:31:04.249121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-26 15:31:04.249128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249316] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249403] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 
15:31:04.249590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249678] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-26 15:31:04.249711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-26 15:31:04.249722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-26 15:31:04.249729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-26 15:31:04.249738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-26 15:31:04.249745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-26 15:31:04.249755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-26 15:31:04.249762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-26 15:31:04.249771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 
nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-26 15:31:04.249778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-26 15:31:04.249787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-26 15:31:04.249795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-26 15:31:04.249804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-26 15:31:04.249811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-26 15:31:04.249820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-26 15:31:04.249827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-26 15:31:04.249836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-26 15:31:04.249862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-26 15:31:04.249871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-26 15:31:04.249878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:47.000 [2024-04-26 15:31:04.249887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-26 15:31:04.249894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-26 15:31:04.249903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-26 15:31:04.249910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-26 15:31:04.249919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-26 15:31:04.249926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-26 15:31:04.249935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-26 15:31:04.249944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-26 15:31:04.249953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-26 15:31:04.249960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-26 15:31:04.249968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f5a0 is same with the state(5) to be set 00:20:47.000 [2024-04-26 15:31:04.251975] 
nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:47.000 [2024-04-26 15:31:04.252000] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:47.000 [2024-04-26 15:31:04.252010] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:47.000 [2024-04-26 15:31:04.252019] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:47.000 [2024-04-26 15:31:04.252082] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:47.000 [2024-04-26 15:31:04.252105] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:47.000 [2024-04-26 15:31:04.252115] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:47.000 [2024-04-26 15:31:04.252202] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:47.000 [2024-04-26 15:31:04.252213] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:20:47.000 [2024-04-26 15:31:04.252222] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:47.000 [2024-04-26 15:31:04.252496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.000 [2024-04-26 15:31:04.252690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.000 [2024-04-26 15:31:04.252700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec3ce0 with addr=10.0.0.2, port=4420 00:20:47.000 [2024-04-26 15:31:04.252709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec3ce0 is same with the state(5) to be set 00:20:47.000 [2024-04-26 
15:31:04.252918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.000 [2024-04-26 15:31:04.253299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.000 [2024-04-26 15:31:04.253308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1365c20 with addr=10.0.0.2, port=4420 00:20:47.000 [2024-04-26 15:31:04.253316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1365c20 is same with the state(5) to be set 00:20:47.000 [2024-04-26 15:31:04.253686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.000 [2024-04-26 15:31:04.254043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.000 [2024-04-26 15:31:04.254053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1384910 with addr=10.0.0.2, port=4420 00:20:47.000 [2024-04-26 15:31:04.254060] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1384910 is same with the state(5) to be set 00:20:47.000 [2024-04-26 15:31:04.254405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.000 [2024-04-26 15:31:04.254761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.000 [2024-04-26 15:31:04.254770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151b240 with addr=10.0.0.2, port=4420 00:20:47.000 [2024-04-26 15:31:04.254778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151b240 is same with the state(5) to be set 00:20:47.000 [2024-04-26 15:31:04.256599] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:47.000 [2024-04-26 15:31:04.256618] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:47.000 [2024-04-26 15:31:04.256794] 
posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.000 [2024-04-26 15:31:04.257165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.000 [2024-04-26 15:31:04.257175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1511000 with addr=10.0.0.2, port=4420 00:20:47.000 [2024-04-26 15:31:04.257182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1511000 is same with the state(5) to be set 00:20:47.000 [2024-04-26 15:31:04.257518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.000 [2024-04-26 15:31:04.257706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.000 [2024-04-26 15:31:04.257716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148deb0 with addr=10.0.0.2, port=4420 00:20:47.000 [2024-04-26 15:31:04.257723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148deb0 is same with the state(5) to be set 00:20:47.000 [2024-04-26 15:31:04.258066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.000 [2024-04-26 15:31:04.258412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.000 [2024-04-26 15:31:04.258422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1384ff0 with addr=10.0.0.2, port=4420 00:20:47.000 [2024-04-26 15:31:04.258429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1384ff0 is same with the state(5) to be set 00:20:47.000 [2024-04-26 15:31:04.258439] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec3ce0 (9): Bad file descriptor 00:20:47.000 [2024-04-26 15:31:04.258449] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1365c20 (9): Bad file descriptor 00:20:47.000 [2024-04-26 
15:31:04.258458] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1384910 (9): Bad file descriptor 00:20:47.000 [2024-04-26 15:31:04.258467] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151b240 (9): Bad file descriptor 00:20:47.000 [2024-04-26 15:31:04.258553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-26 15:31:04.258563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-26 15:31:04.258577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-26 15:31:04.258584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-26 15:31:04.258594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-26 15:31:04.258601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-26 15:31:04.258610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-26 15:31:04.258617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-26 15:31:04.258627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-26 15:31:04.258633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-26 15:31:04.258643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-26 15:31:04.258653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-26 15:31:04.258662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-26 15:31:04.258669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-26 15:31:04.258679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-26 15:31:04.258686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-26 15:31:04.258695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-26 15:31:04.258702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-26 15:31:04.258712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-26 15:31:04.258718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-26 15:31:04.258728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:47.001 [2024-04-26 15:31:04.258735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-26 15:31:04.258744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-26 15:31:04.258751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-26 15:31:04.258760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-26 15:31:04.258767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-26 15:31:04.258776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-26 15:31:04.258783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-26 15:31:04.258792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-26 15:31:04.258799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-26 15:31:04.258808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-26 15:31:04.258815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-26 15:31:04.258825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-26 15:31:04.258831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-26 15:31:04.258844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-26 15:31:04.258851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-26 15:31:04.258862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-26 15:31:04.258869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-26 15:31:04.258879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-26 15:31:04.258886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-26 15:31:04.258895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-26 15:31:04.258902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-26 15:31:04.258911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-26 15:31:04.258918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-26 15:31:04.258927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-26 15:31:04.258934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-26 15:31:04.258943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-26 15:31:04.258950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-26 15:31:04.258959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-26 15:31:04.258966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-26 15:31:04.258975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-26 15:31:04.258982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-26 15:31:04.258991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-26 15:31:04.258998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-26 15:31:04.259007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:47.001 [2024-04-26 15:31:04.259014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-26 15:31:04.259023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-26 15:31:04.259030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-26 15:31:04.259039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-26 15:31:04.259046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-26 15:31:04.259055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-26 15:31:04.259064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-26 15:31:04.259073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-26 15:31:04.259080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-26 15:31:04.259089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-26 15:31:04.259096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-26 15:31:04.259105] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.001 [2024-04-26 15:31:04.259112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.001 [2024-04-26 15:31:04.259121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.001 [2024-04-26 15:31:04.259128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.001 [2024-04-26 15:31:04.259137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.001 [2024-04-26 15:31:04.259144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.001 [2024-04-26 15:31:04.259153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.001 [2024-04-26 15:31:04.259160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.001 [2024-04-26 15:31:04.259170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.001 [2024-04-26 15:31:04.259177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.001 [2024-04-26 15:31:04.259187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.001 [2024-04-26 15:31:04.259194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.001 [2024-04-26 15:31:04.259203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.001 [2024-04-26 15:31:04.259210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.001 [2024-04-26 15:31:04.259220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.001 [2024-04-26 15:31:04.259226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.001 [2024-04-26 15:31:04.259236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.001 [2024-04-26 15:31:04.259242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.001 [2024-04-26 15:31:04.259252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.001 [2024-04-26 15:31:04.259259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.001 [2024-04-26 15:31:04.259269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.001 [2024-04-26 15:31:04.259276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.001 [2024-04-26 15:31:04.259285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.001 [2024-04-26 15:31:04.259292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.001 [2024-04-26 15:31:04.259301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.001 [2024-04-26 15:31:04.259308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.001 [2024-04-26 15:31:04.259317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.001 [2024-04-26 15:31:04.259324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.001 [2024-04-26 15:31:04.259333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.001 [2024-04-26 15:31:04.259340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.001 [2024-04-26 15:31:04.259349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.001 [2024-04-26 15:31:04.259356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.001 [2024-04-26 15:31:04.259365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.001 [2024-04-26 15:31:04.259372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.001 [2024-04-26 15:31:04.259381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.001 [2024-04-26 15:31:04.259388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.001 [2024-04-26 15:31:04.259397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.001 [2024-04-26 15:31:04.259404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.001 [2024-04-26 15:31:04.259414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.002 [2024-04-26 15:31:04.259421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.002 [2024-04-26 15:31:04.259430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.002 [2024-04-26 15:31:04.259437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.002 [2024-04-26 15:31:04.259446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.002 [2024-04-26 15:31:04.259453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.002 [2024-04-26 15:31:04.259463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.002 [2024-04-26 15:31:04.259472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.002 [2024-04-26 15:31:04.259481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.002 [2024-04-26 15:31:04.259488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.002 [2024-04-26 15:31:04.259497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.002 [2024-04-26 15:31:04.259504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.002 [2024-04-26 15:31:04.259514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.002 [2024-04-26 15:31:04.259521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.002 [2024-04-26 15:31:04.259530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.002 [2024-04-26 15:31:04.259537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.002 [2024-04-26 15:31:04.259546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.002 [2024-04-26 15:31:04.259553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.002 [2024-04-26 15:31:04.259562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61
nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.002 [2024-04-26 15:31:04.259569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.002 [2024-04-26 15:31:04.259579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.002 [2024-04-26 15:31:04.259585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.002 [2024-04-26 15:31:04.259595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.002 [2024-04-26 15:31:04.259602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.002 [2024-04-26 15:31:04.259610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135e100 is same with the state(5) to be set
00:20:47.002 task offset: 29568 on job bdev=Nvme1n1 fails
00:20:47.002
00:20:47.002 Latency(us)
00:20:47.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:47.002 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:47.002 Job: Nvme1n1 ended in about 0.96 seconds with error
00:20:47.002 Verification LBA range: start 0x0 length 0x400
00:20:47.002 Nvme1n1 : 0.96 200.92 12.56 66.97 0.00 236281.81 12069.55 244667.73
00:20:47.002 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:47.002 Job: Nvme2n1 ended in about 0.97 seconds with error
00:20:47.002 Verification LBA range: start 0x0 length 0x400
00:20:47.002 Nvme2n1 : 0.97 132.43 8.28 66.22 0.00 312495.79 14308.69 256901.12
00:20:47.002 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:47.002 Job: Nvme3n1 ended in about 0.97 seconds with error
00:20:47.002 Verification LBA range: start 0x0 length 0x400
00:20:47.002 Nvme3n1 : 0.97 198.17 12.39 66.06 0.00 230100.69 21954.56 237677.23
00:20:47.002 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:47.002 Job: Nvme4n1 ended in about 0.97 seconds with error
00:20:47.002 Verification LBA range: start 0x0 length 0x400
00:20:47.002 Nvme4n1 : 0.97 197.69 12.36 65.90 0.00 225988.80 11523.41 230686.72
00:20:47.002 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:47.002 Job: Nvme5n1 ended in about 0.96 seconds with error
00:20:47.002 Verification LBA range: start 0x0 length 0x400
00:20:47.002 Nvme5n1 : 0.96 200.64 12.54 66.88 0.00 217586.77 14745.60 246415.36
00:20:47.002 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:47.002 Job: Nvme6n1 ended in about 0.97 seconds with error
00:20:47.002 Verification LBA range: start 0x0 length 0x400
00:20:47.002 Nvme6n1 : 0.97 197.22 12.33 65.74 0.00 217034.24 18459.31 242920.11
00:20:47.002 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:47.002 Job: Nvme7n1 ended in about 0.98 seconds with error
00:20:47.002 Verification LBA range: start 0x0 length 0x400
00:20:47.002 Nvme7n1 : 0.98 196.75 12.30 65.58 0.00 212853.44 12342.61 255153.49
00:20:47.002 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:47.002 Job: Nvme8n1 ended in about 0.98 seconds with error
00:20:47.002 Verification LBA range: start 0x0 length 0x400
00:20:47.002 Nvme8n1 : 0.98 196.28 12.27 65.43 0.00 208755.63 17913.17 237677.23
00:20:47.002 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:47.002 Job: Nvme9n1 ended in about 0.99 seconds with error
00:20:47.002 Verification LBA range: start 0x0 length 0x400
00:20:47.002 Nvme9n1 : 0.99 129.27 8.08 64.64 0.00 276083.20 20316.16 269134.51
00:20:47.002 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:47.002 Job: Nvme10n1 ended in about 0.98 seconds with error
00:20:47.002 Verification LBA range: start 0x0 length 0x400
00:20:47.002 Nvme10n1 : 0.98 130.54 8.16 65.27 0.00 266703.08 20862.29 244667.73
00:20:47.002 ===================================================================================================================
00:20:47.002 Total : 1779.93 111.25 658.68 0.00 236763.56 11523.41 269134.51
00:20:47.002 [2024-04-26 15:31:04.289187] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:47.002 [2024-04-26 15:31:04.289237] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:20:47.002 [2024-04-26 15:31:04.289675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.002 [2024-04-26 15:31:04.289771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.002 [2024-04-26 15:31:04.289780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1514700 with addr=10.0.0.2, port=4420
00:20:47.002 [2024-04-26 15:31:04.289790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1514700 is same with the state(5) to be set
00:20:47.002 [2024-04-26 15:31:04.290004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.002 [2024-04-26 15:31:04.290361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.002 [2024-04-26 15:31:04.290372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1362eb0 with addr=10.0.0.2, port=4420
00:20:47.002 [2024-04-26 15:31:04.290380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362eb0 is same with the state(5) to be set
00:20:47.002 [2024-04-26 15:31:04.290395] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1511000 (9): Bad file descriptor
00:20:47.002 [2024-04-26
15:31:04.290406] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148deb0 (9): Bad file descriptor
00:20:47.002 [2024-04-26 15:31:04.290415] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1384ff0 (9): Bad file descriptor
00:20:47.002 [2024-04-26 15:31:04.290424] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:20:47.002 [2024-04-26 15:31:04.290436] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:20:47.002 [2024-04-26 15:31:04.290445] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:20:47.002 [2024-04-26 15:31:04.290460] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:20:47.002 [2024-04-26 15:31:04.290466] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:20:47.002 [2024-04-26 15:31:04.290473] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:20:47.002 [2024-04-26 15:31:04.290484] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:20:47.002 [2024-04-26 15:31:04.290490] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:20:47.002 [2024-04-26 15:31:04.290497] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:20:47.002 [2024-04-26 15:31:04.290508] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:20:47.002 [2024-04-26 15:31:04.290514] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:20:47.002 [2024-04-26 15:31:04.290521] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:20:47.002 [2024-04-26 15:31:04.290644] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.002 [2024-04-26 15:31:04.290654] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.002 [2024-04-26 15:31:04.290660] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.002 [2024-04-26 15:31:04.290666] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.002 [2024-04-26 15:31:04.291033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.002 [2024-04-26 15:31:04.291271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.002 [2024-04-26 15:31:04.291281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1360fb0 with addr=10.0.0.2, port=4420
00:20:47.002 [2024-04-26 15:31:04.291288] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360fb0 is same with the state(5) to be set
00:20:47.002 [2024-04-26 15:31:04.291298] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1514700 (9): Bad file descriptor
00:20:47.002 [2024-04-26 15:31:04.291307] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1362eb0 (9): Bad file descriptor
00:20:47.002 [2024-04-26 15:31:04.291315] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:20:47.002 [2024-04-26 15:31:04.291321] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:20:47.002 [2024-04-26 15:31:04.291328] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:20:47.002 [2024-04-26 15:31:04.291338] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:20:47.003 [2024-04-26 15:31:04.291345] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:20:47.003 [2024-04-26 15:31:04.291351] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:20:47.003 [2024-04-26 15:31:04.291360] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:20:47.003 [2024-04-26 15:31:04.291367] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:20:47.003 [2024-04-26 15:31:04.291373] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:20:47.003 [2024-04-26 15:31:04.291415] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:47.003 [2024-04-26 15:31:04.291428] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:47.003 [2024-04-26 15:31:04.291439] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:47.003 [2024-04-26 15:31:04.291456] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:47.003 [2024-04-26 15:31:04.291468] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:47.003 [2024-04-26 15:31:04.291774] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.003 [2024-04-26 15:31:04.291785] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.003 [2024-04-26 15:31:04.291791] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.003 [2024-04-26 15:31:04.291812] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1360fb0 (9): Bad file descriptor
00:20:47.003 [2024-04-26 15:31:04.291821] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:20:47.003 [2024-04-26 15:31:04.291827] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:20:47.003 [2024-04-26 15:31:04.291834] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:20:47.003 [2024-04-26 15:31:04.291849] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:47.003 [2024-04-26 15:31:04.291856] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:47.003 [2024-04-26 15:31:04.291862] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:47.003 [2024-04-26 15:31:04.292130] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:20:47.003 [2024-04-26 15:31:04.292142] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:20:47.003 [2024-04-26 15:31:04.292152] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:20:47.003 [2024-04-26 15:31:04.292161] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:20:47.003 [2024-04-26 15:31:04.292170] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.003 [2024-04-26 15:31:04.292176] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.003 [2024-04-26 15:31:04.292204] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:20:47.003 [2024-04-26 15:31:04.292211] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:20:47.003 [2024-04-26 15:31:04.292218] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:20:47.003 [2024-04-26 15:31:04.292256] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.003 [2024-04-26 15:31:04.292584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.003 [2024-04-26 15:31:04.292800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.003 [2024-04-26 15:31:04.292809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151b240 with addr=10.0.0.2, port=4420
00:20:47.003 [2024-04-26 15:31:04.292817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151b240 is same with the state(5) to be set
00:20:47.003 [2024-04-26 15:31:04.293197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.003 [2024-04-26 15:31:04.293528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.003 [2024-04-26 15:31:04.293537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1384910 with addr=10.0.0.2, port=4420
00:20:47.003 [2024-04-26 15:31:04.293544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1384910 is same with the state(5) to be set
00:20:47.003 [2024-04-26 15:31:04.293872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.003 [2024-04-26 15:31:04.294213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.003 [2024-04-26 15:31:04.294223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1365c20 with addr=10.0.0.2, port=4420
00:20:47.003 [2024-04-26 15:31:04.294230] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1365c20 is same with the state(5) to be set
00:20:47.003 [2024-04-26 15:31:04.294406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.003 [2024-04-26 15:31:04.294711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.003 [2024-04-26 15:31:04.294720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec3ce0 with addr=10.0.0.2, port=4420
00:20:47.003 [2024-04-26 15:31:04.294727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec3ce0 is same with the state(5) to be set
00:20:47.003 [2024-04-26 15:31:04.294756] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151b240 (9): Bad file descriptor
00:20:47.003 [2024-04-26 15:31:04.294766] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1384910 (9): Bad file descriptor
00:20:47.003 [2024-04-26 15:31:04.294775] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1365c20 (9): Bad file descriptor
00:20:47.003 [2024-04-26 15:31:04.294784] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec3ce0 (9): Bad file descriptor
00:20:47.003 [2024-04-26 15:31:04.294818] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:20:47.003 [2024-04-26 15:31:04.294825] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:20:47.003 [2024-04-26 15:31:04.294832] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:20:47.003 [2024-04-26 15:31:04.294846] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:20:47.003 [2024-04-26 15:31:04.294852] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:20:47.003 [2024-04-26 15:31:04.294858] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:20:47.003 [2024-04-26 15:31:04.294868] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:20:47.003 [2024-04-26 15:31:04.294874] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:20:47.003 [2024-04-26 15:31:04.294880] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:20:47.003 [2024-04-26 15:31:04.294889] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:20:47.003 [2024-04-26 15:31:04.294895] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:20:47.003 [2024-04-26 15:31:04.294901] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:20:47.003 [2024-04-26 15:31:04.294929] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.003 [2024-04-26 15:31:04.294936] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.003 [2024-04-26 15:31:04.294942] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.003 [2024-04-26 15:31:04.294948] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.264 15:31:04 -- target/shutdown.sh@136 -- # nvmfpid=
00:20:47.264 15:31:04 -- target/shutdown.sh@139 -- # sleep 1
00:20:48.204 15:31:05 -- target/shutdown.sh@142 -- # kill -9 1686112
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1686112) - No such process
00:20:48.204 15:31:05 -- target/shutdown.sh@142 -- # true
00:20:48.204 15:31:05 -- target/shutdown.sh@144 -- # stoptarget
00:20:48.204 15:31:05 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:20:48.204 15:31:05 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:48.204 15:31:05 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:48.204 15:31:05 -- target/shutdown.sh@45 -- # nvmftestfini
00:20:48.204 15:31:05 -- nvmf/common.sh@477 -- # nvmfcleanup
00:20:48.204 15:31:05 -- nvmf/common.sh@117 -- # sync
00:20:48.204 15:31:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:48.204 15:31:05 -- nvmf/common.sh@120 -- # set +e
00:20:48.204 15:31:05 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:48.204 15:31:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:48.204 rmmod nvme_tcp
00:20:48.204 rmmod nvme_fabrics
00:20:48.204 rmmod nvme_keyring
00:20:48.204 15:31:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:48.204 15:31:05 -- nvmf/common.sh@124 -- # set -e
00:20:48.204 15:31:05 -- nvmf/common.sh@125 -- # return 0
00:20:48.204 15:31:05 -- nvmf/common.sh@478 -- # '[' -n '' ']'
00:20:48.204 15:31:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:20:48.204 15:31:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:20:48.204 15:31:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:20:48.204 15:31:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:48.204 15:31:05 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:48.204 15:31:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:48.204 15:31:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:48.204 15:31:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:50.751 15:31:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:50.751
00:20:50.751 real 0m7.590s
00:20:50.751 user 0m18.008s
00:20:50.751 sys 0m1.189s
00:20:50.751 15:31:07 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:50.751 15:31:07 -- common/autotest_common.sh@10 -- # set +x
00:20:50.751 ************************************
00:20:50.751 END TEST nvmf_shutdown_tc3
00:20:50.751 ************************************
00:20:50.751 15:31:07 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:20:50.751
00:20:50.751 real 0m32.594s
00:20:50.751 user 1m15.503s
00:20:50.751 sys 0m9.259s
00:20:50.751 15:31:07 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:50.751 15:31:07 -- common/autotest_common.sh@10 -- # set +x
00:20:50.751 ************************************
00:20:50.751 END TEST nvmf_shutdown
00:20:50.751 ************************************
00:20:50.751 15:31:07 -- nvmf/nvmf.sh@84 -- # timing_exit target
00:20:50.751 15:31:07 -- common/autotest_common.sh@716 -- # xtrace_disable
00:20:50.751 15:31:07 -- common/autotest_common.sh@10 -- # set +x
00:20:50.751 15:31:07 -- nvmf/nvmf.sh@86 -- # timing_enter host
00:20:50.751 15:31:07 -- common/autotest_common.sh@710 -- # xtrace_disable
00:20:50.751 15:31:07 -- common/autotest_common.sh@10 -- # set +x
00:20:50.751 15:31:07 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]]
00:20:50.751 15:31:07 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:20:50.751 15:31:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:20:50.751 15:31:07 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:50.751 15:31:07 -- common/autotest_common.sh@10 -- # set +x
00:20:50.751 ************************************
00:20:50.751 START TEST nvmf_multicontroller
00:20:50.751 ************************************
00:20:50.751 15:31:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:20:50.751 * Looking for test storage...
00:20:50.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:20:50.751 15:31:07 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:20:50.751 15:31:07 -- nvmf/common.sh@7 -- # uname -s
00:20:50.751 15:31:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:50.751 15:31:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:50.751 15:31:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:50.751 15:31:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:50.751 15:31:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:50.751 15:31:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:50.751 15:31:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:50.751 15:31:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:50.751 15:31:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:50.751 15:31:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:50.751 15:31:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:20:50.751 15:31:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:20:50.751 15:31:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:50.751 15:31:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:50.751 15:31:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:50.751 15:31:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:50.751 15:31:08 -- nvmf/common.sh@45 -- # source
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:50.751 15:31:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.751 15:31:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.751 15:31:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.751 15:31:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.751 15:31:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.751 15:31:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.751 15:31:08 -- paths/export.sh@5 -- # export PATH 00:20:50.751 15:31:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.751 15:31:08 -- nvmf/common.sh@47 -- # : 0 00:20:50.751 15:31:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:50.751 15:31:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:50.751 15:31:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:50.751 15:31:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:50.751 15:31:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.751 15:31:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:50.751 15:31:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:50.751 15:31:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:50.751 15:31:08 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:50.751 15:31:08 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:50.751 15:31:08 -- 
host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:50.751 15:31:08 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:50.751 15:31:08 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:50.751 15:31:08 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:50.751 15:31:08 -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:50.751 15:31:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:50.751 15:31:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:50.751 15:31:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:50.751 15:31:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:50.751 15:31:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:50.751 15:31:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.751 15:31:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:50.751 15:31:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.751 15:31:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:50.751 15:31:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:50.751 15:31:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:50.751 15:31:08 -- common/autotest_common.sh@10 -- # set +x 00:20:58.976 15:31:15 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:58.976 15:31:15 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:58.976 15:31:15 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:58.976 15:31:15 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:58.976 15:31:15 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:58.976 15:31:15 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:58.976 15:31:15 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:58.976 15:31:15 -- nvmf/common.sh@295 -- # net_devs=() 00:20:58.976 15:31:15 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:58.976 15:31:15 -- nvmf/common.sh@296 -- # e810=() 00:20:58.976 15:31:15 -- nvmf/common.sh@296 -- # local 
-ga e810 00:20:58.976 15:31:15 -- nvmf/common.sh@297 -- # x722=() 00:20:58.976 15:31:15 -- nvmf/common.sh@297 -- # local -ga x722 00:20:58.976 15:31:15 -- nvmf/common.sh@298 -- # mlx=() 00:20:58.976 15:31:15 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:58.976 15:31:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:58.976 15:31:15 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:58.976 15:31:15 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:58.976 15:31:15 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:58.976 15:31:15 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:58.976 15:31:15 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:58.976 15:31:15 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:58.976 15:31:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:58.976 15:31:15 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:58.976 15:31:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:58.976 15:31:15 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:58.976 15:31:15 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:58.976 15:31:15 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:58.976 15:31:15 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:58.976 15:31:15 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:58.976 15:31:15 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:58.976 15:31:15 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:58.976 15:31:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:58.976 15:31:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:58.976 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:58.976 15:31:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:58.976 15:31:15 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:58.976 15:31:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.976 15:31:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.976 15:31:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:58.976 15:31:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:58.976 15:31:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:58.976 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:58.976 15:31:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:58.976 15:31:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:58.976 15:31:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.976 15:31:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.976 15:31:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:58.976 15:31:15 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:58.976 15:31:15 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:58.976 15:31:15 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:58.976 15:31:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:58.976 15:31:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.976 15:31:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:58.976 15:31:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.976 15:31:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:58.976 Found net devices under 0000:31:00.0: cvl_0_0 00:20:58.976 15:31:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.976 15:31:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:58.976 15:31:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.976 15:31:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:58.976 15:31:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.976 15:31:15 -- nvmf/common.sh@389 -- # echo 
'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:58.976 Found net devices under 0000:31:00.1: cvl_0_1 00:20:58.976 15:31:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.976 15:31:15 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:58.976 15:31:15 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:58.976 15:31:15 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:58.976 15:31:15 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:58.976 15:31:15 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:58.976 15:31:15 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.976 15:31:15 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.976 15:31:15 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:58.976 15:31:15 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:58.976 15:31:15 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:58.976 15:31:15 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:58.976 15:31:15 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:58.976 15:31:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:58.976 15:31:15 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.976 15:31:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:58.976 15:31:15 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:58.976 15:31:15 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:58.976 15:31:15 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:58.976 15:31:15 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:58.976 15:31:15 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:58.976 15:31:15 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:58.976 15:31:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:58.976 15:31:15 -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:20:58.976 15:31:15 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:58.976 15:31:15 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:58.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:58.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:20:58.976 00:20:58.976 --- 10.0.0.2 ping statistics --- 00:20:58.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.976 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:20:58.976 15:31:15 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:58.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:58.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:20:58.976 00:20:58.976 --- 10.0.0.1 ping statistics --- 00:20:58.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.976 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:20:58.976 15:31:15 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:58.976 15:31:15 -- nvmf/common.sh@411 -- # return 0 00:20:58.976 15:31:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:58.976 15:31:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:58.976 15:31:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:58.976 15:31:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:58.976 15:31:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:58.976 15:31:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:58.976 15:31:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:58.977 15:31:15 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:58.977 15:31:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:58.977 15:31:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:58.977 15:31:15 -- common/autotest_common.sh@10 -- # set +x 00:20:58.977 15:31:15 -- nvmf/common.sh@470 -- # nvmfpid=1691122 00:20:58.977 
15:31:15 -- nvmf/common.sh@471 -- # waitforlisten 1691122 00:20:58.977 15:31:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:58.977 15:31:15 -- common/autotest_common.sh@817 -- # '[' -z 1691122 ']' 00:20:58.977 15:31:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.977 15:31:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:58.977 15:31:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.977 15:31:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:58.977 15:31:15 -- common/autotest_common.sh@10 -- # set +x 00:20:58.977 [2024-04-26 15:31:15.517611] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:20:58.977 [2024-04-26 15:31:15.517699] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.977 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.977 [2024-04-26 15:31:15.610267] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:58.977 [2024-04-26 15:31:15.702391] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.977 [2024-04-26 15:31:15.702444] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.977 [2024-04-26 15:31:15.702453] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.977 [2024-04-26 15:31:15.702459] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:58.977 [2024-04-26 15:31:15.702466] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:58.977 [2024-04-26 15:31:15.702601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.977 [2024-04-26 15:31:15.702770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:58.977 [2024-04-26 15:31:15.702770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:58.977 15:31:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:58.977 15:31:16 -- common/autotest_common.sh@850 -- # return 0 00:20:58.977 15:31:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:58.977 15:31:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:58.977 15:31:16 -- common/autotest_common.sh@10 -- # set +x 00:20:58.977 15:31:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.977 15:31:16 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:58.977 15:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.977 15:31:16 -- common/autotest_common.sh@10 -- # set +x 00:20:58.977 [2024-04-26 15:31:16.340801] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.977 15:31:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.977 15:31:16 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:58.977 15:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.977 15:31:16 -- common/autotest_common.sh@10 -- # set +x 00:20:58.977 Malloc0 00:20:58.977 15:31:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.977 15:31:16 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:58.977 15:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.977 15:31:16 -- common/autotest_common.sh@10 -- # set +x 00:20:58.977 
15:31:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.977 15:31:16 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:58.977 15:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.977 15:31:16 -- common/autotest_common.sh@10 -- # set +x 00:20:58.977 15:31:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.977 15:31:16 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:58.977 15:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.977 15:31:16 -- common/autotest_common.sh@10 -- # set +x 00:20:58.977 [2024-04-26 15:31:16.407222] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.977 15:31:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.977 15:31:16 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:58.977 15:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.977 15:31:16 -- common/autotest_common.sh@10 -- # set +x 00:20:58.977 [2024-04-26 15:31:16.419189] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:58.977 15:31:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.977 15:31:16 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:59.238 15:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.238 15:31:16 -- common/autotest_common.sh@10 -- # set +x 00:20:59.238 Malloc1 00:20:59.238 15:31:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.238 15:31:16 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:59.238 15:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.238 15:31:16 -- common/autotest_common.sh@10 -- # 
set +x 00:20:59.238 15:31:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.238 15:31:16 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:59.238 15:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.238 15:31:16 -- common/autotest_common.sh@10 -- # set +x 00:20:59.238 15:31:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.238 15:31:16 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:59.238 15:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.238 15:31:16 -- common/autotest_common.sh@10 -- # set +x 00:20:59.238 15:31:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.238 15:31:16 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:59.238 15:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.238 15:31:16 -- common/autotest_common.sh@10 -- # set +x 00:20:59.238 15:31:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.238 15:31:16 -- host/multicontroller.sh@44 -- # bdevperf_pid=1691273 00:20:59.238 15:31:16 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:59.238 15:31:16 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:59.238 15:31:16 -- host/multicontroller.sh@47 -- # waitforlisten 1691273 /var/tmp/bdevperf.sock 00:20:59.238 15:31:16 -- common/autotest_common.sh@817 -- # '[' -z 1691273 ']' 00:20:59.238 15:31:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.238 15:31:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:59.238 15:31:16 -- 
common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:59.238 15:31:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:59.238 15:31:16 -- common/autotest_common.sh@10 -- # set +x 00:21:00.193 15:31:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:00.193 15:31:17 -- common/autotest_common.sh@850 -- # return 0 00:21:00.194 15:31:17 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:00.194 15:31:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.194 15:31:17 -- common/autotest_common.sh@10 -- # set +x 00:21:00.194 NVMe0n1 00:21:00.194 15:31:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.194 15:31:17 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:00.194 15:31:17 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:00.194 15:31:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.194 15:31:17 -- common/autotest_common.sh@10 -- # set +x 00:21:00.194 15:31:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.194 1 00:21:00.194 15:31:17 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:00.194 15:31:17 -- common/autotest_common.sh@638 -- # local es=0 00:21:00.194 15:31:17 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:00.194 15:31:17 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:00.194 15:31:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:00.194 15:31:17 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:00.194 15:31:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:00.194 15:31:17 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:00.194 15:31:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.194 15:31:17 -- common/autotest_common.sh@10 -- # set +x 00:21:00.194 request: 00:21:00.194 { 00:21:00.194 "name": "NVMe0", 00:21:00.194 "trtype": "tcp", 00:21:00.194 "traddr": "10.0.0.2", 00:21:00.194 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:00.194 "hostaddr": "10.0.0.2", 00:21:00.194 "hostsvcid": "60000", 00:21:00.194 "adrfam": "ipv4", 00:21:00.194 "trsvcid": "4420", 00:21:00.194 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.194 "method": "bdev_nvme_attach_controller", 00:21:00.194 "req_id": 1 00:21:00.194 } 00:21:00.194 Got JSON-RPC error response 00:21:00.194 response: 00:21:00.194 { 00:21:00.194 "code": -114, 00:21:00.194 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:00.194 } 00:21:00.194 15:31:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:00.194 15:31:17 -- common/autotest_common.sh@641 -- # es=1 00:21:00.194 15:31:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:00.194 15:31:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:00.194 15:31:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:00.194 15:31:17 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:00.194 15:31:17 -- 
common/autotest_common.sh@638 -- # local es=0 00:21:00.194 15:31:17 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:00.194 15:31:17 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:00.194 15:31:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:00.194 15:31:17 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:00.194 15:31:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:00.194 15:31:17 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:00.194 15:31:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.194 15:31:17 -- common/autotest_common.sh@10 -- # set +x 00:21:00.194 request: 00:21:00.194 { 00:21:00.194 "name": "NVMe0", 00:21:00.194 "trtype": "tcp", 00:21:00.194 "traddr": "10.0.0.2", 00:21:00.194 "hostaddr": "10.0.0.2", 00:21:00.194 "hostsvcid": "60000", 00:21:00.194 "adrfam": "ipv4", 00:21:00.194 "trsvcid": "4420", 00:21:00.194 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:00.194 "method": "bdev_nvme_attach_controller", 00:21:00.194 "req_id": 1 00:21:00.194 } 00:21:00.194 Got JSON-RPC error response 00:21:00.194 response: 00:21:00.194 { 00:21:00.194 "code": -114, 00:21:00.194 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:00.194 } 00:21:00.194 15:31:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:00.194 15:31:17 -- common/autotest_common.sh@641 -- # es=1 00:21:00.194 15:31:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:00.194 15:31:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:00.194 15:31:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:00.194 15:31:17 -- 
host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:00.194 15:31:17 -- common/autotest_common.sh@638 -- # local es=0 00:21:00.194 15:31:17 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:00.194 15:31:17 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:00.194 15:31:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:00.194 15:31:17 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:00.194 15:31:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:00.194 15:31:17 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:00.194 15:31:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.194 15:31:17 -- common/autotest_common.sh@10 -- # set +x 00:21:00.194 request: 00:21:00.194 { 00:21:00.194 "name": "NVMe0", 00:21:00.194 "trtype": "tcp", 00:21:00.194 "traddr": "10.0.0.2", 00:21:00.194 "hostaddr": "10.0.0.2", 00:21:00.194 "hostsvcid": "60000", 00:21:00.194 "adrfam": "ipv4", 00:21:00.194 "trsvcid": "4420", 00:21:00.194 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.194 "multipath": "disable", 00:21:00.194 "method": "bdev_nvme_attach_controller", 00:21:00.194 "req_id": 1 00:21:00.194 } 00:21:00.194 Got JSON-RPC error response 00:21:00.194 response: 00:21:00.194 { 00:21:00.194 "code": -114, 00:21:00.194 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:00.194 } 00:21:00.194 15:31:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:00.194 15:31:17 -- 
common/autotest_common.sh@641 -- # es=1 00:21:00.194 15:31:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:00.194 15:31:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:00.194 15:31:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:00.194 15:31:17 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:00.194 15:31:17 -- common/autotest_common.sh@638 -- # local es=0 00:21:00.194 15:31:17 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:00.194 15:31:17 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:00.194 15:31:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:00.194 15:31:17 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:00.194 15:31:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:00.194 15:31:17 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:00.194 15:31:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.194 15:31:17 -- common/autotest_common.sh@10 -- # set +x 00:21:00.194 request: 00:21:00.194 { 00:21:00.194 "name": "NVMe0", 00:21:00.194 "trtype": "tcp", 00:21:00.194 "traddr": "10.0.0.2", 00:21:00.194 "hostaddr": "10.0.0.2", 00:21:00.194 "hostsvcid": "60000", 00:21:00.194 "adrfam": "ipv4", 00:21:00.194 "trsvcid": "4420", 00:21:00.194 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.194 "multipath": "failover", 00:21:00.194 "method": "bdev_nvme_attach_controller", 00:21:00.194 "req_id": 1 00:21:00.194 } 00:21:00.194 Got JSON-RPC error response 
00:21:00.194 response: 00:21:00.194 { 00:21:00.194 "code": -114, 00:21:00.194 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:00.194 } 00:21:00.194 15:31:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:00.194 15:31:17 -- common/autotest_common.sh@641 -- # es=1 00:21:00.194 15:31:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:00.194 15:31:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:00.194 15:31:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:00.194 15:31:17 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:00.194 15:31:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.194 15:31:17 -- common/autotest_common.sh@10 -- # set +x 00:21:00.458 00:21:00.458 15:31:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.458 15:31:17 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:00.458 15:31:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.458 15:31:17 -- common/autotest_common.sh@10 -- # set +x 00:21:00.458 15:31:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.458 15:31:17 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:00.458 15:31:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.458 15:31:17 -- common/autotest_common.sh@10 -- # set +x 00:21:00.458 00:21:00.458 15:31:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.458 15:31:17 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:00.458 15:31:17 -- host/multicontroller.sh@90 -- # grep -c NVMe 
00:21:00.458 15:31:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.458 15:31:17 -- common/autotest_common.sh@10 -- # set +x 00:21:00.458 15:31:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.458 15:31:17 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:00.458 15:31:17 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:01.846 0 00:21:01.846 15:31:18 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:01.846 15:31:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.846 15:31:18 -- common/autotest_common.sh@10 -- # set +x 00:21:01.846 15:31:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.846 15:31:18 -- host/multicontroller.sh@100 -- # killprocess 1691273 00:21:01.846 15:31:18 -- common/autotest_common.sh@936 -- # '[' -z 1691273 ']' 00:21:01.846 15:31:18 -- common/autotest_common.sh@940 -- # kill -0 1691273 00:21:01.846 15:31:18 -- common/autotest_common.sh@941 -- # uname 00:21:01.846 15:31:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:01.846 15:31:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1691273 00:21:01.846 15:31:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:01.846 15:31:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:01.846 15:31:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1691273' 00:21:01.846 killing process with pid 1691273 00:21:01.846 15:31:18 -- common/autotest_common.sh@955 -- # kill 1691273 00:21:01.846 15:31:18 -- common/autotest_common.sh@960 -- # wait 1691273 00:21:01.846 15:31:19 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:01.846 15:31:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.846 15:31:19 -- 
common/autotest_common.sh@10 -- # set +x 00:21:01.846 15:31:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.846 15:31:19 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:01.846 15:31:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.846 15:31:19 -- common/autotest_common.sh@10 -- # set +x 00:21:01.846 15:31:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.846 15:31:19 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:01.846 15:31:19 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:01.846 15:31:19 -- common/autotest_common.sh@1598 -- # read -r file 00:21:01.846 15:31:19 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:01.846 15:31:19 -- common/autotest_common.sh@1597 -- # sort -u 00:21:01.846 15:31:19 -- common/autotest_common.sh@1599 -- # cat 00:21:01.846 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:01.846 [2024-04-26 15:31:16.538656] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:21:01.846 [2024-04-26 15:31:16.538712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1691273 ]
00:21:01.846 EAL: No free 2048 kB hugepages reported on node 1
00:21:01.846 [2024-04-26 15:31:16.598426] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:01.846 [2024-04-26 15:31:16.660860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:01.846 [2024-04-26 15:31:17.785241] bdev.c:4551:bdev_name_add: *ERROR*: Bdev name 57851eac-ec59-4b6a-85dd-523b68620159 already exists
00:21:01.846 [2024-04-26 15:31:17.785273] bdev.c:7668:bdev_register: *ERROR*: Unable to add uuid:57851eac-ec59-4b6a-85dd-523b68620159 alias for bdev NVMe1n1
00:21:01.846 [2024-04-26 15:31:17.785283] bdev_nvme.c:4276:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:21:01.846 Running I/O for 1 seconds...
00:21:01.846
00:21:01.846 Latency(us)
00:21:01.846 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:01.846 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:21:01.846 NVMe0n1 : 1.00 28404.45 110.95 0.00 0.00 4498.51 2088.96 15619.41
00:21:01.846 ===================================================================================================================
00:21:01.846 Total : 28404.45 110.95 0.00 0.00 4498.51 2088.96 15619.41
00:21:01.847 Received shutdown signal, test time was about 1.000000 seconds
00:21:01.847
00:21:01.847 Latency(us)
00:21:01.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:01.847 ===================================================================================================================
00:21:01.847 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:01.847 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:21:01.847 15:31:19 --
common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:01.847 15:31:19 -- common/autotest_common.sh@1598 -- # read -r file 00:21:01.847 15:31:19 -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:01.847 15:31:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:01.847 15:31:19 -- nvmf/common.sh@117 -- # sync 00:21:01.847 15:31:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:01.847 15:31:19 -- nvmf/common.sh@120 -- # set +e 00:21:01.847 15:31:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:01.847 15:31:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:01.847 rmmod nvme_tcp 00:21:01.847 rmmod nvme_fabrics 00:21:01.847 rmmod nvme_keyring 00:21:01.847 15:31:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:01.847 15:31:19 -- nvmf/common.sh@124 -- # set -e 00:21:01.847 15:31:19 -- nvmf/common.sh@125 -- # return 0 00:21:01.847 15:31:19 -- nvmf/common.sh@478 -- # '[' -n 1691122 ']' 00:21:01.847 15:31:19 -- nvmf/common.sh@479 -- # killprocess 1691122 00:21:01.847 15:31:19 -- common/autotest_common.sh@936 -- # '[' -z 1691122 ']' 00:21:01.847 15:31:19 -- common/autotest_common.sh@940 -- # kill -0 1691122 00:21:01.847 15:31:19 -- common/autotest_common.sh@941 -- # uname 00:21:01.847 15:31:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:01.847 15:31:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1691122 00:21:01.847 15:31:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:01.847 15:31:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:01.847 15:31:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1691122' 00:21:01.847 killing process with pid 1691122 00:21:01.847 15:31:19 -- common/autotest_common.sh@955 -- # kill 1691122 00:21:01.847 15:31:19 -- common/autotest_common.sh@960 -- # wait 1691122 00:21:02.108 15:31:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:02.108 
15:31:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:02.108 15:31:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:02.108 15:31:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:02.108 15:31:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:02.108 15:31:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.108 15:31:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:02.108 15:31:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.657 15:31:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:04.657 00:21:04.657 real 0m13.590s 00:21:04.657 user 0m16.157s 00:21:04.657 sys 0m6.248s 00:21:04.657 15:31:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:04.657 15:31:21 -- common/autotest_common.sh@10 -- # set +x 00:21:04.657 ************************************ 00:21:04.657 END TEST nvmf_multicontroller 00:21:04.657 ************************************ 00:21:04.657 15:31:21 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:04.657 15:31:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:04.657 15:31:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:04.657 15:31:21 -- common/autotest_common.sh@10 -- # set +x 00:21:04.657 ************************************ 00:21:04.657 START TEST nvmf_aer 00:21:04.657 ************************************ 00:21:04.657 15:31:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:04.657 * Looking for test storage... 
00:21:04.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:04.657 15:31:21 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:04.657 15:31:21 -- nvmf/common.sh@7 -- # uname -s 00:21:04.657 15:31:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.657 15:31:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.657 15:31:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.657 15:31:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.657 15:31:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.657 15:31:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.657 15:31:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.657 15:31:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.657 15:31:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.657 15:31:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.657 15:31:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:04.657 15:31:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:04.657 15:31:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.657 15:31:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.657 15:31:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:04.657 15:31:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.657 15:31:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:04.657 15:31:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.657 15:31:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.657 15:31:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.657 15:31:21 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.657 15:31:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.658 15:31:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.658 15:31:21 -- paths/export.sh@5 -- # export PATH 00:21:04.658 15:31:21 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.658 15:31:21 -- nvmf/common.sh@47 -- # : 0 00:21:04.658 15:31:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:04.658 15:31:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:04.658 15:31:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.658 15:31:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.658 15:31:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.658 15:31:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:04.658 15:31:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:04.658 15:31:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:04.658 15:31:21 -- host/aer.sh@11 -- # nvmftestinit 00:21:04.658 15:31:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:04.658 15:31:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.658 15:31:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:04.658 15:31:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:04.658 15:31:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:04.658 15:31:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.658 15:31:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:04.658 15:31:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.658 15:31:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:04.658 15:31:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:04.658 15:31:21 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:21:04.658 15:31:21 -- common/autotest_common.sh@10 -- # set +x 00:21:11.243 15:31:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:11.243 15:31:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:11.243 15:31:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:11.243 15:31:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:11.243 15:31:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:11.243 15:31:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:11.243 15:31:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:11.243 15:31:28 -- nvmf/common.sh@295 -- # net_devs=() 00:21:11.243 15:31:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:11.243 15:31:28 -- nvmf/common.sh@296 -- # e810=() 00:21:11.243 15:31:28 -- nvmf/common.sh@296 -- # local -ga e810 00:21:11.243 15:31:28 -- nvmf/common.sh@297 -- # x722=() 00:21:11.243 15:31:28 -- nvmf/common.sh@297 -- # local -ga x722 00:21:11.243 15:31:28 -- nvmf/common.sh@298 -- # mlx=() 00:21:11.243 15:31:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:11.243 15:31:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.243 15:31:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.243 15:31:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.243 15:31:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.243 15:31:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.243 15:31:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.243 15:31:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.243 15:31:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.243 15:31:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.243 15:31:28 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.243 15:31:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.243 15:31:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:11.243 15:31:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:11.243 15:31:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:11.243 15:31:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:11.243 15:31:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:11.243 15:31:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:11.243 15:31:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.243 15:31:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:11.243 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:11.243 15:31:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.243 15:31:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.243 15:31:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.243 15:31:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.243 15:31:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.243 15:31:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.243 15:31:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:11.243 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:11.243 15:31:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.243 15:31:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.243 15:31:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.243 15:31:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.243 15:31:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.243 15:31:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:11.243 15:31:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:11.243 15:31:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:11.243 15:31:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:21:11.243 15:31:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.243 15:31:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:11.243 15:31:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.243 15:31:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:11.243 Found net devices under 0000:31:00.0: cvl_0_0 00:21:11.243 15:31:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.243 15:31:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.243 15:31:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.243 15:31:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:11.243 15:31:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.243 15:31:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:11.243 Found net devices under 0000:31:00.1: cvl_0_1 00:21:11.243 15:31:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.243 15:31:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:11.243 15:31:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:11.243 15:31:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:11.243 15:31:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:11.243 15:31:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:11.243 15:31:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.243 15:31:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.243 15:31:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.243 15:31:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:11.243 15:31:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:11.243 15:31:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:11.243 15:31:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:11.243 15:31:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:21:11.243 15:31:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:11.243 15:31:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:21:11.243 15:31:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:21:11.243 15:31:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:21:11.243 15:31:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:11.243 15:31:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:11.243 15:31:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:11.504 15:31:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:21:11.504 15:31:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:11.504 15:31:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:11.504 15:31:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:11.504 15:31:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:21:11.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:11.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms
00:21:11.504
00:21:11.504 --- 10.0.0.2 ping statistics ---
00:21:11.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:11.504 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms
00:21:11.504 15:31:28 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:11.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:11.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms
00:21:11.504
00:21:11.504 --- 10.0.0.1 ping statistics ---
00:21:11.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:11.504 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms
00:21:11.504 15:31:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:11.504 15:31:28 -- nvmf/common.sh@411 -- # return 0
00:21:11.504 15:31:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:21:11.504 15:31:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:11.504 15:31:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:21:11.504 15:31:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:21:11.504 15:31:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:11.504 15:31:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:21:11.504 15:31:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:21:11.504 15:31:28 -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:21:11.504 15:31:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:21:11.504 15:31:28 -- common/autotest_common.sh@710 -- # xtrace_disable
00:21:11.504 15:31:28 -- common/autotest_common.sh@10 -- # set +x
00:21:11.504 15:31:28 -- nvmf/common.sh@470 -- # nvmfpid=1696021
00:21:11.504 15:31:28 -- nvmf/common.sh@471 -- # waitforlisten 1696021
00:21:11.504 15:31:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:21:11.504 15:31:28 -- common/autotest_common.sh@817 -- # '[' -z 1696021 ']'
00:21:11.504 15:31:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:11.504 15:31:28 -- common/autotest_common.sh@822 -- # local max_retries=100
00:21:11.504 15:31:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:11.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.504 15:31:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:11.504 15:31:28 -- common/autotest_common.sh@10 -- # set +x 00:21:11.504 [2024-04-26 15:31:28.907462] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:21:11.504 [2024-04-26 15:31:28.907511] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.504 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.764 [2024-04-26 15:31:28.976121] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:11.764 [2024-04-26 15:31:29.039510] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.764 [2024-04-26 15:31:29.039547] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.764 [2024-04-26 15:31:29.039554] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.764 [2024-04-26 15:31:29.039561] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.764 [2024-04-26 15:31:29.039566] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:11.764 [2024-04-26 15:31:29.039630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.764 [2024-04-26 15:31:29.039765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.764 [2024-04-26 15:31:29.039782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:11.764 [2024-04-26 15:31:29.039788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.335 15:31:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:12.335 15:31:29 -- common/autotest_common.sh@850 -- # return 0 00:21:12.335 15:31:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:12.335 15:31:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:12.335 15:31:29 -- common/autotest_common.sh@10 -- # set +x 00:21:12.335 15:31:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.335 15:31:29 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:12.335 15:31:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.335 15:31:29 -- common/autotest_common.sh@10 -- # set +x 00:21:12.335 [2024-04-26 15:31:29.718386] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.335 15:31:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.335 15:31:29 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:12.335 15:31:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.335 15:31:29 -- common/autotest_common.sh@10 -- # set +x 00:21:12.335 Malloc0 00:21:12.335 15:31:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.335 15:31:29 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:12.335 15:31:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.335 15:31:29 -- common/autotest_common.sh@10 -- # set +x 00:21:12.335 15:31:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:21:12.335 15:31:29 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:12.335 15:31:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.335 15:31:29 -- common/autotest_common.sh@10 -- # set +x 00:21:12.335 15:31:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.335 15:31:29 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:12.335 15:31:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.335 15:31:29 -- common/autotest_common.sh@10 -- # set +x 00:21:12.335 [2024-04-26 15:31:29.758698] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.335 15:31:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.335 15:31:29 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:12.335 15:31:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.335 15:31:29 -- common/autotest_common.sh@10 -- # set +x 00:21:12.335 [2024-04-26 15:31:29.766506] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:12.335 [ 00:21:12.335 { 00:21:12.335 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:12.335 "subtype": "Discovery", 00:21:12.335 "listen_addresses": [], 00:21:12.335 "allow_any_host": true, 00:21:12.335 "hosts": [] 00:21:12.335 }, 00:21:12.335 { 00:21:12.335 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.335 "subtype": "NVMe", 00:21:12.335 "listen_addresses": [ 00:21:12.335 { 00:21:12.335 "transport": "TCP", 00:21:12.335 "trtype": "TCP", 00:21:12.335 "adrfam": "IPv4", 00:21:12.335 "traddr": "10.0.0.2", 00:21:12.335 "trsvcid": "4420" 00:21:12.335 } 00:21:12.335 ], 00:21:12.335 "allow_any_host": true, 00:21:12.335 "hosts": [], 00:21:12.335 "serial_number": "SPDK00000000000001", 00:21:12.335 "model_number": "SPDK bdev Controller", 
00:21:12.335 "max_namespaces": 2, 00:21:12.335 "min_cntlid": 1, 00:21:12.335 "max_cntlid": 65519, 00:21:12.335 "namespaces": [ 00:21:12.335 { 00:21:12.335 "nsid": 1, 00:21:12.335 "bdev_name": "Malloc0", 00:21:12.335 "name": "Malloc0", 00:21:12.335 "nguid": "267BB9DB47534C5AACAA94A6C7883FD0", 00:21:12.335 "uuid": "267bb9db-4753-4c5a-acaa-94a6c7883fd0" 00:21:12.335 } 00:21:12.335 ] 00:21:12.335 } 00:21:12.335 ] 00:21:12.335 15:31:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.335 15:31:29 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:12.335 15:31:29 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:12.335 15:31:29 -- host/aer.sh@33 -- # aerpid=1696368 00:21:12.335 15:31:29 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:12.335 15:31:29 -- common/autotest_common.sh@1251 -- # local i=0 00:21:12.335 15:31:29 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:12.335 15:31:29 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:21:12.335 15:31:29 -- common/autotest_common.sh@1254 -- # i=1 00:21:12.335 15:31:29 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:12.335 15:31:29 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:12.595 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.595 15:31:29 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:12.595 15:31:29 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:21:12.595 15:31:29 -- common/autotest_common.sh@1254 -- # i=2 00:21:12.595 15:31:29 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:12.595 15:31:29 -- common/autotest_common.sh@1252 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:12.595 15:31:29 -- common/autotest_common.sh@1253 -- # '[' 2 -lt 200 ']' 00:21:12.595 15:31:29 -- common/autotest_common.sh@1254 -- # i=3 00:21:12.595 15:31:29 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:12.857 15:31:30 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:12.857 15:31:30 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:12.857 15:31:30 -- common/autotest_common.sh@1262 -- # return 0 00:21:12.857 15:31:30 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:12.857 15:31:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.857 15:31:30 -- common/autotest_common.sh@10 -- # set +x 00:21:12.857 Malloc1 00:21:12.857 15:31:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.857 15:31:30 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:12.857 15:31:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.857 15:31:30 -- common/autotest_common.sh@10 -- # set +x 00:21:12.857 15:31:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.857 15:31:30 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:12.857 15:31:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.857 15:31:30 -- common/autotest_common.sh@10 -- # set +x 00:21:12.857 [ 00:21:12.857 { 00:21:12.857 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:12.857 "subtype": "Discovery", 00:21:12.857 "listen_addresses": [], 00:21:12.857 "allow_any_host": true, 00:21:12.857 "hosts": [] 00:21:12.857 }, 00:21:12.857 { 00:21:12.857 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.857 "subtype": "NVMe", 00:21:12.857 "listen_addresses": [ 00:21:12.857 { 00:21:12.857 "transport": "TCP", 00:21:12.857 "trtype": "TCP", 00:21:12.857 "adrfam": "IPv4", 00:21:12.857 "traddr": "10.0.0.2", 00:21:12.857 "trsvcid": "4420" 00:21:12.857 } 00:21:12.857 ], 00:21:12.857 "allow_any_host": true, 
00:21:12.857 "hosts": [],
00:21:12.857 "serial_number": "SPDK00000000000001",
00:21:12.857 "model_number": "SPDK bdev Controller",
00:21:12.857 "max_namespaces": 2,
00:21:12.857 "min_cntlid": 1,
00:21:12.857 "max_cntlid": 65519,
00:21:12.857 "namespaces": [
00:21:12.857 {
00:21:12.857 "nsid": 1,
00:21:12.857 "bdev_name": "Malloc0",
00:21:12.857 "name": "Malloc0",
00:21:12.857 "nguid": "267BB9DB47534C5AACAA94A6C7883FD0",
00:21:12.857 "uuid": "267bb9db-4753-4c5a-acaa-94a6c7883fd0"
00:21:12.857 },
00:21:12.857 {
00:21:12.857 "nsid": 2,
00:21:12.857 "bdev_name": "Malloc1",
00:21:12.857 "name": "Malloc1",
00:21:12.857 "nguid": "4E01B136809848749AB4CA6747A2E8EC",
00:21:12.857 "uuid": "4e01b136-8098-4874-9ab4-ca6747a2e8ec"
00:21:12.857 }
00:21:12.857 ]
00:21:12.857 }
00:21:12.857 ]
00:21:12.857 15:31:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:12.857 15:31:30 -- host/aer.sh@43 -- # wait 1696368
00:21:12.857 Asynchronous Event Request test
00:21:12.857 Attaching to 10.0.0.2
00:21:12.857 Attached to 10.0.0.2
00:21:12.857 Registering asynchronous event callbacks...
00:21:12.857 Starting namespace attribute notice tests for all controllers...
00:21:12.857 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:21:12.857 aer_cb - Changed Namespace
00:21:12.857 Cleaning up...
00:21:12.857 15:31:30 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:12.857 15:31:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.857 15:31:30 -- common/autotest_common.sh@10 -- # set +x 00:21:12.857 15:31:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.857 15:31:30 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:12.857 15:31:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.857 15:31:30 -- common/autotest_common.sh@10 -- # set +x 00:21:12.857 15:31:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.857 15:31:30 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:12.857 15:31:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.857 15:31:30 -- common/autotest_common.sh@10 -- # set +x 00:21:12.857 15:31:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.857 15:31:30 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:12.857 15:31:30 -- host/aer.sh@51 -- # nvmftestfini 00:21:12.857 15:31:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:12.857 15:31:30 -- nvmf/common.sh@117 -- # sync 00:21:12.857 15:31:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:12.857 15:31:30 -- nvmf/common.sh@120 -- # set +e 00:21:12.857 15:31:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:12.857 15:31:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:12.857 rmmod nvme_tcp 00:21:12.857 rmmod nvme_fabrics 00:21:12.857 rmmod nvme_keyring 00:21:12.857 15:31:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:12.857 15:31:30 -- nvmf/common.sh@124 -- # set -e 00:21:12.857 15:31:30 -- nvmf/common.sh@125 -- # return 0 00:21:12.857 15:31:30 -- nvmf/common.sh@478 -- # '[' -n 1696021 ']' 00:21:12.857 15:31:30 -- nvmf/common.sh@479 -- # killprocess 1696021 00:21:12.857 15:31:30 -- common/autotest_common.sh@936 -- # '[' -z 1696021 ']' 00:21:12.857 15:31:30 -- common/autotest_common.sh@940 -- # kill -0 1696021 00:21:12.857 
15:31:30 -- common/autotest_common.sh@941 -- # uname 00:21:12.857 15:31:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:12.857 15:31:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1696021 00:21:12.857 15:31:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:12.857 15:31:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:13.118 15:31:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1696021' 00:21:13.118 killing process with pid 1696021 00:21:13.118 15:31:30 -- common/autotest_common.sh@955 -- # kill 1696021 00:21:13.118 [2024-04-26 15:31:30.306151] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:13.118 15:31:30 -- common/autotest_common.sh@960 -- # wait 1696021 00:21:13.118 15:31:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:13.118 15:31:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:13.118 15:31:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:13.118 15:31:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:13.118 15:31:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:13.118 15:31:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.118 15:31:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.118 15:31:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.667 15:31:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:15.667 00:21:15.667 real 0m10.841s 00:21:15.667 user 0m7.615s 00:21:15.667 sys 0m5.580s 00:21:15.667 15:31:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:15.667 15:31:32 -- common/autotest_common.sh@10 -- # set +x 00:21:15.667 ************************************ 00:21:15.667 END TEST nvmf_aer 00:21:15.667 ************************************ 00:21:15.667 15:31:32 -- 
nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:15.667 15:31:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:15.667 15:31:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:15.667 15:31:32 -- common/autotest_common.sh@10 -- # set +x 00:21:15.667 ************************************ 00:21:15.667 START TEST nvmf_async_init 00:21:15.667 ************************************ 00:21:15.667 15:31:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:15.667 * Looking for test storage... 00:21:15.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:15.667 15:31:32 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:15.667 15:31:32 -- nvmf/common.sh@7 -- # uname -s 00:21:15.667 15:31:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:15.667 15:31:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:15.667 15:31:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.667 15:31:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.667 15:31:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.667 15:31:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.667 15:31:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.667 15:31:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.667 15:31:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.667 15:31:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.667 15:31:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:15.667 15:31:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:15.667 15:31:32 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.667 15:31:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.667 15:31:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:15.667 15:31:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:15.667 15:31:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:15.667 15:31:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.667 15:31:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.667 15:31:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.668 15:31:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.668 15:31:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.668 15:31:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.668 15:31:32 -- paths/export.sh@5 -- # export PATH 00:21:15.668 15:31:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.668 15:31:32 -- nvmf/common.sh@47 -- # : 0 00:21:15.668 15:31:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:15.668 15:31:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:15.668 15:31:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:15.668 15:31:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:15.668 15:31:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.668 15:31:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:15.668 15:31:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:15.668 15:31:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:15.668 15:31:32 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:15.668 15:31:32 -- host/async_init.sh@14 -- # null_block_size=512 00:21:15.668 15:31:32 -- host/async_init.sh@15 -- 
# null_bdev=null0 00:21:15.668 15:31:32 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:15.668 15:31:32 -- host/async_init.sh@20 -- # uuidgen 00:21:15.668 15:31:32 -- host/async_init.sh@20 -- # tr -d - 00:21:15.668 15:31:32 -- host/async_init.sh@20 -- # nguid=c24358ee45b949828b9f11d13e8d1585 00:21:15.668 15:31:32 -- host/async_init.sh@22 -- # nvmftestinit 00:21:15.668 15:31:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:15.668 15:31:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.668 15:31:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:15.668 15:31:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:15.668 15:31:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:15.668 15:31:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.668 15:31:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:15.668 15:31:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.668 15:31:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:15.668 15:31:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:15.668 15:31:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:15.668 15:31:32 -- common/autotest_common.sh@10 -- # set +x 00:21:22.256 15:31:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:22.256 15:31:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:22.256 15:31:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:22.256 15:31:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:22.256 15:31:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:22.256 15:31:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:22.256 15:31:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:22.256 15:31:39 -- nvmf/common.sh@295 -- # net_devs=() 00:21:22.256 15:31:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:22.256 15:31:39 -- nvmf/common.sh@296 -- # e810=() 00:21:22.256 15:31:39 -- nvmf/common.sh@296 -- # local -ga e810 00:21:22.256 
15:31:39 -- nvmf/common.sh@297 -- # x722=() 00:21:22.256 15:31:39 -- nvmf/common.sh@297 -- # local -ga x722 00:21:22.256 15:31:39 -- nvmf/common.sh@298 -- # mlx=() 00:21:22.256 15:31:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:22.256 15:31:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:22.256 15:31:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:22.256 15:31:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:22.256 15:31:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:22.256 15:31:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:22.256 15:31:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:22.256 15:31:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:22.256 15:31:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:22.256 15:31:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:22.256 15:31:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:22.256 15:31:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:22.256 15:31:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:22.256 15:31:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:22.256 15:31:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:22.256 15:31:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:22.256 15:31:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:22.256 15:31:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:22.256 15:31:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:22.256 15:31:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:22.256 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:22.256 15:31:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:22.256 15:31:39 -- nvmf/common.sh@346 -- # [[ 
ice == unbound ]] 00:21:22.256 15:31:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.256 15:31:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.256 15:31:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:22.256 15:31:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:22.256 15:31:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:22.256 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:22.256 15:31:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:22.256 15:31:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:22.256 15:31:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.256 15:31:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.256 15:31:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:22.256 15:31:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:22.256 15:31:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:22.256 15:31:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:22.256 15:31:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:22.256 15:31:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.256 15:31:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:22.256 15:31:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.257 15:31:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:22.257 Found net devices under 0000:31:00.0: cvl_0_0 00:21:22.257 15:31:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.257 15:31:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:22.257 15:31:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.257 15:31:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:22.257 15:31:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.257 15:31:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:31:00.1: cvl_0_1' 00:21:22.257 Found net devices under 0000:31:00.1: cvl_0_1 00:21:22.257 15:31:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.257 15:31:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:22.257 15:31:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:22.257 15:31:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:22.257 15:31:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:22.257 15:31:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:22.257 15:31:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.257 15:31:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:22.257 15:31:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:22.257 15:31:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:22.257 15:31:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:22.257 15:31:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:22.257 15:31:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:22.257 15:31:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:22.257 15:31:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.257 15:31:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:22.257 15:31:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:22.257 15:31:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:22.257 15:31:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:22.257 15:31:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:22.257 15:31:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:22.257 15:31:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:22.518 15:31:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:22.518 15:31:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
00:21:22.518 15:31:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:22.518 15:31:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:22.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:22.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.716 ms 00:21:22.518 00:21:22.518 --- 10.0.0.2 ping statistics --- 00:21:22.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.518 rtt min/avg/max/mdev = 0.716/0.716/0.716/0.000 ms 00:21:22.518 15:31:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:22.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:22.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:21:22.518 00:21:22.518 --- 10.0.0.1 ping statistics --- 00:21:22.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.518 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:21:22.518 15:31:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.518 15:31:39 -- nvmf/common.sh@411 -- # return 0 00:21:22.518 15:31:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:22.518 15:31:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:22.518 15:31:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:22.518 15:31:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:22.518 15:31:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:22.518 15:31:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:22.518 15:31:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:22.518 15:31:39 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:22.518 15:31:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:22.518 15:31:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:22.518 15:31:39 -- common/autotest_common.sh@10 -- # set +x 00:21:22.518 15:31:39 -- nvmf/common.sh@470 -- # nvmfpid=1700508 00:21:22.518 15:31:39 -- nvmf/common.sh@471 -- # 
waitforlisten 1700508 00:21:22.518 15:31:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:22.518 15:31:39 -- common/autotest_common.sh@817 -- # '[' -z 1700508 ']' 00:21:22.518 15:31:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.518 15:31:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:22.518 15:31:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.518 15:31:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:22.518 15:31:39 -- common/autotest_common.sh@10 -- # set +x 00:21:22.518 [2024-04-26 15:31:39.921990] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:21:22.519 [2024-04-26 15:31:39.922057] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.519 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.779 [2024-04-26 15:31:39.993570] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.779 [2024-04-26 15:31:40.068008] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.779 [2024-04-26 15:31:40.068050] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.779 [2024-04-26 15:31:40.068057] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:22.779 [2024-04-26 15:31:40.068064] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:22.779 [2024-04-26 15:31:40.068070] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:22.779 [2024-04-26 15:31:40.068089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.353 15:31:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:23.353 15:31:40 -- common/autotest_common.sh@850 -- # return 0 00:21:23.353 15:31:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:23.353 15:31:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:23.353 15:31:40 -- common/autotest_common.sh@10 -- # set +x 00:21:23.353 15:31:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.353 15:31:40 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:23.353 15:31:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.353 15:31:40 -- common/autotest_common.sh@10 -- # set +x 00:21:23.353 [2024-04-26 15:31:40.731584] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.353 15:31:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.353 15:31:40 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:23.353 15:31:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.353 15:31:40 -- common/autotest_common.sh@10 -- # set +x 00:21:23.353 null0 00:21:23.353 15:31:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.353 15:31:40 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:23.353 15:31:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.353 15:31:40 -- common/autotest_common.sh@10 -- # set +x 00:21:23.353 15:31:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.353 15:31:40 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:23.353 15:31:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.353 15:31:40 -- 
common/autotest_common.sh@10 -- # set +x 00:21:23.353 15:31:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.353 15:31:40 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g c24358ee45b949828b9f11d13e8d1585 00:21:23.353 15:31:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.353 15:31:40 -- common/autotest_common.sh@10 -- # set +x 00:21:23.353 15:31:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.353 15:31:40 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:23.353 15:31:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.353 15:31:40 -- common/autotest_common.sh@10 -- # set +x 00:21:23.353 [2024-04-26 15:31:40.787821] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.353 15:31:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.353 15:31:40 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:23.353 15:31:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.353 15:31:40 -- common/autotest_common.sh@10 -- # set +x 00:21:23.615 nvme0n1 00:21:23.615 15:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.615 15:31:41 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:23.615 15:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.615 15:31:41 -- common/autotest_common.sh@10 -- # set +x 00:21:23.615 [ 00:21:23.615 { 00:21:23.615 "name": "nvme0n1", 00:21:23.615 "aliases": [ 00:21:23.615 "c24358ee-45b9-4982-8b9f-11d13e8d1585" 00:21:23.615 ], 00:21:23.615 "product_name": "NVMe disk", 00:21:23.615 "block_size": 512, 00:21:23.615 "num_blocks": 2097152, 00:21:23.615 "uuid": "c24358ee-45b9-4982-8b9f-11d13e8d1585", 00:21:23.615 "assigned_rate_limits": { 00:21:23.615 "rw_ios_per_sec": 0, 
00:21:23.615 "rw_mbytes_per_sec": 0, 00:21:23.615 "r_mbytes_per_sec": 0, 00:21:23.615 "w_mbytes_per_sec": 0 00:21:23.615 }, 00:21:23.615 "claimed": false, 00:21:23.615 "zoned": false, 00:21:23.615 "supported_io_types": { 00:21:23.615 "read": true, 00:21:23.615 "write": true, 00:21:23.615 "unmap": false, 00:21:23.615 "write_zeroes": true, 00:21:23.615 "flush": true, 00:21:23.615 "reset": true, 00:21:23.615 "compare": true, 00:21:23.615 "compare_and_write": true, 00:21:23.615 "abort": true, 00:21:23.615 "nvme_admin": true, 00:21:23.615 "nvme_io": true 00:21:23.615 }, 00:21:23.615 "memory_domains": [ 00:21:23.615 { 00:21:23.615 "dma_device_id": "system", 00:21:23.615 "dma_device_type": 1 00:21:23.615 } 00:21:23.615 ], 00:21:23.615 "driver_specific": { 00:21:23.615 "nvme": [ 00:21:23.615 { 00:21:23.615 "trid": { 00:21:23.615 "trtype": "TCP", 00:21:23.615 "adrfam": "IPv4", 00:21:23.615 "traddr": "10.0.0.2", 00:21:23.615 "trsvcid": "4420", 00:21:23.615 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:23.615 }, 00:21:23.615 "ctrlr_data": { 00:21:23.615 "cntlid": 1, 00:21:23.615 "vendor_id": "0x8086", 00:21:23.615 "model_number": "SPDK bdev Controller", 00:21:23.615 "serial_number": "00000000000000000000", 00:21:23.615 "firmware_revision": "24.05", 00:21:23.615 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:23.615 "oacs": { 00:21:23.615 "security": 0, 00:21:23.615 "format": 0, 00:21:23.615 "firmware": 0, 00:21:23.615 "ns_manage": 0 00:21:23.615 }, 00:21:23.615 "multi_ctrlr": true, 00:21:23.615 "ana_reporting": false 00:21:23.615 }, 00:21:23.615 "vs": { 00:21:23.615 "nvme_version": "1.3" 00:21:23.615 }, 00:21:23.615 "ns_data": { 00:21:23.615 "id": 1, 00:21:23.615 "can_share": true 00:21:23.615 } 00:21:23.615 } 00:21:23.615 ], 00:21:23.615 "mp_policy": "active_passive" 00:21:23.615 } 00:21:23.615 } 00:21:23.615 ] 00:21:23.615 15:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.615 15:31:41 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
00:21:23.615 15:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.615 15:31:41 -- common/autotest_common.sh@10 -- # set +x 00:21:23.615 [2024-04-26 15:31:41.052363] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:23.615 [2024-04-26 15:31:41.052422] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1345280 (9): Bad file descriptor 00:21:23.876 [2024-04-26 15:31:41.183928] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:23.876 15:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.876 15:31:41 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:23.876 15:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.876 15:31:41 -- common/autotest_common.sh@10 -- # set +x 00:21:23.876 [ 00:21:23.876 { 00:21:23.876 "name": "nvme0n1", 00:21:23.876 "aliases": [ 00:21:23.876 "c24358ee-45b9-4982-8b9f-11d13e8d1585" 00:21:23.876 ], 00:21:23.876 "product_name": "NVMe disk", 00:21:23.876 "block_size": 512, 00:21:23.876 "num_blocks": 2097152, 00:21:23.876 "uuid": "c24358ee-45b9-4982-8b9f-11d13e8d1585", 00:21:23.876 "assigned_rate_limits": { 00:21:23.876 "rw_ios_per_sec": 0, 00:21:23.876 "rw_mbytes_per_sec": 0, 00:21:23.876 "r_mbytes_per_sec": 0, 00:21:23.876 "w_mbytes_per_sec": 0 00:21:23.876 }, 00:21:23.876 "claimed": false, 00:21:23.876 "zoned": false, 00:21:23.876 "supported_io_types": { 00:21:23.876 "read": true, 00:21:23.876 "write": true, 00:21:23.876 "unmap": false, 00:21:23.876 "write_zeroes": true, 00:21:23.876 "flush": true, 00:21:23.876 "reset": true, 00:21:23.876 "compare": true, 00:21:23.876 "compare_and_write": true, 00:21:23.876 "abort": true, 00:21:23.876 "nvme_admin": true, 00:21:23.876 "nvme_io": true 00:21:23.876 }, 00:21:23.876 "memory_domains": [ 00:21:23.876 { 00:21:23.876 "dma_device_id": "system", 00:21:23.876 "dma_device_type": 1 00:21:23.876 } 
00:21:23.876 ], 00:21:23.876 "driver_specific": { 00:21:23.876 "nvme": [ 00:21:23.876 { 00:21:23.876 "trid": { 00:21:23.876 "trtype": "TCP", 00:21:23.876 "adrfam": "IPv4", 00:21:23.876 "traddr": "10.0.0.2", 00:21:23.876 "trsvcid": "4420", 00:21:23.876 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:23.876 }, 00:21:23.876 "ctrlr_data": { 00:21:23.876 "cntlid": 2, 00:21:23.876 "vendor_id": "0x8086", 00:21:23.876 "model_number": "SPDK bdev Controller", 00:21:23.876 "serial_number": "00000000000000000000", 00:21:23.876 "firmware_revision": "24.05", 00:21:23.876 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:23.876 "oacs": { 00:21:23.876 "security": 0, 00:21:23.876 "format": 0, 00:21:23.876 "firmware": 0, 00:21:23.876 "ns_manage": 0 00:21:23.876 }, 00:21:23.876 "multi_ctrlr": true, 00:21:23.876 "ana_reporting": false 00:21:23.876 }, 00:21:23.876 "vs": { 00:21:23.876 "nvme_version": "1.3" 00:21:23.876 }, 00:21:23.877 "ns_data": { 00:21:23.877 "id": 1, 00:21:23.877 "can_share": true 00:21:23.877 } 00:21:23.877 } 00:21:23.877 ], 00:21:23.877 "mp_policy": "active_passive" 00:21:23.877 } 00:21:23.877 } 00:21:23.877 ] 00:21:23.877 15:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.877 15:31:41 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:23.877 15:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.877 15:31:41 -- common/autotest_common.sh@10 -- # set +x 00:21:23.877 15:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.877 15:31:41 -- host/async_init.sh@53 -- # mktemp 00:21:23.877 15:31:41 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.SP4UwH7xnC 00:21:23.877 15:31:41 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:23.877 15:31:41 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.SP4UwH7xnC 00:21:23.877 15:31:41 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:23.877 15:31:41 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.877 15:31:41 -- common/autotest_common.sh@10 -- # set +x 00:21:23.877 15:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.877 15:31:41 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:23.877 15:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.877 15:31:41 -- common/autotest_common.sh@10 -- # set +x 00:21:23.877 [2024-04-26 15:31:41.248980] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:23.877 [2024-04-26 15:31:41.249093] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:23.877 15:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.877 15:31:41 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SP4UwH7xnC 00:21:23.877 15:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.877 15:31:41 -- common/autotest_common.sh@10 -- # set +x 00:21:23.877 [2024-04-26 15:31:41.261008] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:23.877 15:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.877 15:31:41 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SP4UwH7xnC 00:21:23.877 15:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.877 15:31:41 -- common/autotest_common.sh@10 -- # set +x 00:21:23.877 [2024-04-26 15:31:41.273043] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:23.877 [2024-04-26 15:31:41.273079] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated 
feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:24.139 nvme0n1 00:21:24.139 15:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.139 15:31:41 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:24.139 15:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.139 15:31:41 -- common/autotest_common.sh@10 -- # set +x 00:21:24.139 [ 00:21:24.139 { 00:21:24.139 "name": "nvme0n1", 00:21:24.139 "aliases": [ 00:21:24.139 "c24358ee-45b9-4982-8b9f-11d13e8d1585" 00:21:24.139 ], 00:21:24.139 "product_name": "NVMe disk", 00:21:24.139 "block_size": 512, 00:21:24.139 "num_blocks": 2097152, 00:21:24.139 "uuid": "c24358ee-45b9-4982-8b9f-11d13e8d1585", 00:21:24.139 "assigned_rate_limits": { 00:21:24.139 "rw_ios_per_sec": 0, 00:21:24.139 "rw_mbytes_per_sec": 0, 00:21:24.139 "r_mbytes_per_sec": 0, 00:21:24.139 "w_mbytes_per_sec": 0 00:21:24.139 }, 00:21:24.139 "claimed": false, 00:21:24.139 "zoned": false, 00:21:24.139 "supported_io_types": { 00:21:24.139 "read": true, 00:21:24.139 "write": true, 00:21:24.139 "unmap": false, 00:21:24.139 "write_zeroes": true, 00:21:24.139 "flush": true, 00:21:24.139 "reset": true, 00:21:24.139 "compare": true, 00:21:24.139 "compare_and_write": true, 00:21:24.139 "abort": true, 00:21:24.139 "nvme_admin": true, 00:21:24.139 "nvme_io": true 00:21:24.139 }, 00:21:24.139 "memory_domains": [ 00:21:24.139 { 00:21:24.139 "dma_device_id": "system", 00:21:24.139 "dma_device_type": 1 00:21:24.139 } 00:21:24.139 ], 00:21:24.139 "driver_specific": { 00:21:24.139 "nvme": [ 00:21:24.139 { 00:21:24.139 "trid": { 00:21:24.139 "trtype": "TCP", 00:21:24.139 "adrfam": "IPv4", 00:21:24.139 "traddr": "10.0.0.2", 00:21:24.139 "trsvcid": "4421", 00:21:24.139 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:24.139 }, 00:21:24.139 "ctrlr_data": { 00:21:24.139 "cntlid": 3, 00:21:24.139 "vendor_id": "0x8086", 00:21:24.139 "model_number": "SPDK bdev Controller", 00:21:24.139 "serial_number": "00000000000000000000", 
00:21:24.139 "firmware_revision": "24.05", 00:21:24.139 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:24.139 "oacs": { 00:21:24.139 "security": 0, 00:21:24.139 "format": 0, 00:21:24.139 "firmware": 0, 00:21:24.139 "ns_manage": 0 00:21:24.139 }, 00:21:24.139 "multi_ctrlr": true, 00:21:24.139 "ana_reporting": false 00:21:24.139 }, 00:21:24.139 "vs": { 00:21:24.139 "nvme_version": "1.3" 00:21:24.139 }, 00:21:24.139 "ns_data": { 00:21:24.139 "id": 1, 00:21:24.139 "can_share": true 00:21:24.139 } 00:21:24.139 } 00:21:24.139 ], 00:21:24.139 "mp_policy": "active_passive" 00:21:24.139 } 00:21:24.139 } 00:21:24.139 ] 00:21:24.139 15:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.139 15:31:41 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:24.139 15:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.139 15:31:41 -- common/autotest_common.sh@10 -- # set +x 00:21:24.139 15:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.139 15:31:41 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.SP4UwH7xnC 00:21:24.139 15:31:41 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:24.139 15:31:41 -- host/async_init.sh@78 -- # nvmftestfini 00:21:24.139 15:31:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:24.139 15:31:41 -- nvmf/common.sh@117 -- # sync 00:21:24.139 15:31:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:24.139 15:31:41 -- nvmf/common.sh@120 -- # set +e 00:21:24.139 15:31:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:24.139 15:31:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:24.139 rmmod nvme_tcp 00:21:24.139 rmmod nvme_fabrics 00:21:24.139 rmmod nvme_keyring 00:21:24.139 15:31:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:24.139 15:31:41 -- nvmf/common.sh@124 -- # set -e 00:21:24.139 15:31:41 -- nvmf/common.sh@125 -- # return 0 00:21:24.139 15:31:41 -- nvmf/common.sh@478 -- # '[' -n 1700508 ']' 00:21:24.139 15:31:41 -- nvmf/common.sh@479 -- # 
killprocess 1700508 00:21:24.139 15:31:41 -- common/autotest_common.sh@936 -- # '[' -z 1700508 ']' 00:21:24.139 15:31:41 -- common/autotest_common.sh@940 -- # kill -0 1700508 00:21:24.139 15:31:41 -- common/autotest_common.sh@941 -- # uname 00:21:24.139 15:31:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:24.139 15:31:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1700508 00:21:24.139 15:31:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:24.139 15:31:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:24.139 15:31:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1700508' 00:21:24.139 killing process with pid 1700508 00:21:24.139 15:31:41 -- common/autotest_common.sh@955 -- # kill 1700508 00:21:24.139 [2024-04-26 15:31:41.506551] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:24.139 [2024-04-26 15:31:41.506578] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:24.139 15:31:41 -- common/autotest_common.sh@960 -- # wait 1700508 00:21:24.401 15:31:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:24.401 15:31:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:24.401 15:31:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:24.401 15:31:41 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:24.401 15:31:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:24.401 15:31:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.401 15:31:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:24.401 15:31:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.316 15:31:43 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:26.316 00:21:26.316 real 0m10.983s 00:21:26.316 user 0m3.837s 
00:21:26.316 sys 0m5.571s 00:21:26.316 15:31:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:26.316 15:31:43 -- common/autotest_common.sh@10 -- # set +x 00:21:26.316 ************************************ 00:21:26.316 END TEST nvmf_async_init 00:21:26.316 ************************************ 00:21:26.316 15:31:43 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:26.317 15:31:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:26.317 15:31:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:26.317 15:31:43 -- common/autotest_common.sh@10 -- # set +x 00:21:26.578 ************************************ 00:21:26.578 START TEST dma 00:21:26.578 ************************************ 00:21:26.578 15:31:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:26.578 * Looking for test storage... 00:21:26.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:26.578 15:31:43 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:26.578 15:31:43 -- nvmf/common.sh@7 -- # uname -s 00:21:26.578 15:31:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:26.578 15:31:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.578 15:31:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:26.578 15:31:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.578 15:31:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.578 15:31:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.578 15:31:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.578 15:31:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.578 15:31:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.578 15:31:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.578 
15:31:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:26.578 15:31:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:26.578 15:31:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.578 15:31:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.578 15:31:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:26.578 15:31:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:26.578 15:31:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:26.578 15:31:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.578 15:31:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.578 15:31:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.578 15:31:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.578 15:31:43 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.578 15:31:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.578 15:31:43 -- paths/export.sh@5 -- # export PATH 00:21:26.579 15:31:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.579 15:31:43 -- nvmf/common.sh@47 -- # : 0 00:21:26.579 15:31:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:26.579 15:31:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:26.579 15:31:43 -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:21:26.579 15:31:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.579 15:31:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.579 15:31:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:26.579 15:31:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:26.579 15:31:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:26.579 15:31:43 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:26.579 15:31:43 -- host/dma.sh@13 -- # exit 0 00:21:26.579 00:21:26.579 real 0m0.135s 00:21:26.579 user 0m0.059s 00:21:26.579 sys 0m0.084s 00:21:26.579 15:31:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:26.579 15:31:43 -- common/autotest_common.sh@10 -- # set +x 00:21:26.579 ************************************ 00:21:26.579 END TEST dma 00:21:26.579 ************************************ 00:21:26.841 15:31:44 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:26.841 15:31:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:26.841 15:31:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:26.841 15:31:44 -- common/autotest_common.sh@10 -- # set +x 00:21:26.841 ************************************ 00:21:26.841 START TEST nvmf_identify 00:21:26.841 ************************************ 00:21:26.841 15:31:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:26.841 * Looking for test storage... 
00:21:26.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:26.841 15:31:44 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:26.841 15:31:44 -- nvmf/common.sh@7 -- # uname -s 00:21:26.841 15:31:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:26.841 15:31:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.841 15:31:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:26.841 15:31:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.841 15:31:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.841 15:31:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.841 15:31:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.841 15:31:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.841 15:31:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.841 15:31:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.841 15:31:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:26.841 15:31:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:26.841 15:31:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.841 15:31:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.841 15:31:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:26.841 15:31:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:26.841 15:31:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:26.841 15:31:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.841 15:31:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.841 15:31:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.841 15:31:44 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.841 15:31:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.841 15:31:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.841 15:31:44 -- paths/export.sh@5 -- # export PATH 00:21:26.841 15:31:44 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.841 15:31:44 -- nvmf/common.sh@47 -- # : 0 00:21:26.841 15:31:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:26.841 15:31:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:26.841 15:31:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:26.841 15:31:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.841 15:31:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.841 15:31:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:26.841 15:31:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:26.841 15:31:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:26.841 15:31:44 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:26.841 15:31:44 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:26.841 15:31:44 -- host/identify.sh@14 -- # nvmftestinit 00:21:26.841 15:31:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:26.841 15:31:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:26.841 15:31:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:26.841 15:31:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:26.841 15:31:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:26.841 15:31:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.841 15:31:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:26.841 15:31:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.842 15:31:44 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:26.842 15:31:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:26.842 15:31:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:26.842 15:31:44 -- common/autotest_common.sh@10 -- # set +x 00:21:33.435 15:31:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:33.435 15:31:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:33.435 15:31:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:33.435 15:31:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:33.435 15:31:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:33.435 15:31:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:33.435 15:31:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:33.435 15:31:50 -- nvmf/common.sh@295 -- # net_devs=() 00:21:33.435 15:31:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:33.435 15:31:50 -- nvmf/common.sh@296 -- # e810=() 00:21:33.435 15:31:50 -- nvmf/common.sh@296 -- # local -ga e810 00:21:33.435 15:31:50 -- nvmf/common.sh@297 -- # x722=() 00:21:33.435 15:31:50 -- nvmf/common.sh@297 -- # local -ga x722 00:21:33.435 15:31:50 -- nvmf/common.sh@298 -- # mlx=() 00:21:33.435 15:31:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:33.435 15:31:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:33.435 15:31:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:33.435 15:31:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:33.435 15:31:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:33.435 15:31:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:33.435 15:31:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:33.435 15:31:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:33.435 15:31:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:33.435 15:31:50 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:33.435 15:31:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:33.435 15:31:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:33.435 15:31:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:33.435 15:31:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:33.435 15:31:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:33.435 15:31:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:33.435 15:31:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:33.435 15:31:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:33.435 15:31:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:33.435 15:31:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:33.435 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:33.435 15:31:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:33.435 15:31:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:33.435 15:31:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.435 15:31:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.436 15:31:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:33.436 15:31:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:33.436 15:31:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:33.436 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:33.436 15:31:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:33.436 15:31:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:33.436 15:31:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.436 15:31:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.436 15:31:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:33.436 15:31:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:33.436 15:31:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:33.436 15:31:50 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:33.436 15:31:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:33.436 15:31:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.436 15:31:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:33.436 15:31:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.436 15:31:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:33.436 Found net devices under 0000:31:00.0: cvl_0_0 00:21:33.436 15:31:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.436 15:31:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:33.436 15:31:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.436 15:31:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:33.436 15:31:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.436 15:31:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:33.436 Found net devices under 0000:31:00.1: cvl_0_1 00:21:33.436 15:31:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.436 15:31:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:33.436 15:31:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:33.436 15:31:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:33.436 15:31:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:33.436 15:31:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:33.436 15:31:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.436 15:31:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.436 15:31:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:33.436 15:31:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:33.436 15:31:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:33.436 15:31:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:33.436 15:31:50 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:33.436 15:31:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:33.436 15:31:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.436 15:31:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:33.436 15:31:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:33.436 15:31:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:33.436 15:31:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:33.436 15:31:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:33.436 15:31:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:33.436 15:31:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:33.436 15:31:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:33.436 15:31:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:33.436 15:31:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:33.436 15:31:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:33.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:33.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:21:33.436 00:21:33.436 --- 10.0.0.2 ping statistics --- 00:21:33.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.436 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:21:33.436 15:31:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:33.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:33.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:21:33.436 00:21:33.436 --- 10.0.0.1 ping statistics --- 00:21:33.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.436 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:21:33.436 15:31:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.436 15:31:50 -- nvmf/common.sh@411 -- # return 0 00:21:33.436 15:31:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:33.436 15:31:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:33.436 15:31:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:33.436 15:31:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:33.436 15:31:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:33.436 15:31:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:33.436 15:31:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:33.436 15:31:50 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:33.436 15:31:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:33.436 15:31:50 -- common/autotest_common.sh@10 -- # set +x 00:21:33.436 15:31:50 -- host/identify.sh@19 -- # nvmfpid=1705115 00:21:33.436 15:31:50 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:33.436 15:31:50 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:33.436 15:31:50 -- host/identify.sh@23 -- # waitforlisten 1705115 00:21:33.436 15:31:50 -- common/autotest_common.sh@817 -- # '[' -z 1705115 ']' 00:21:33.436 15:31:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.436 15:31:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:33.436 15:31:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:33.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.436 15:31:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:33.436 15:31:50 -- common/autotest_common.sh@10 -- # set +x 00:21:33.697 [2024-04-26 15:31:50.913208] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:21:33.697 [2024-04-26 15:31:50.913283] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.697 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.697 [2024-04-26 15:31:50.986229] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:33.697 [2024-04-26 15:31:51.062351] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.697 [2024-04-26 15:31:51.062391] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.697 [2024-04-26 15:31:51.062399] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.697 [2024-04-26 15:31:51.062406] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.697 [2024-04-26 15:31:51.062415] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:33.697 [2024-04-26 15:31:51.062558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.697 [2024-04-26 15:31:51.062671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.697 [2024-04-26 15:31:51.062791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.697 [2024-04-26 15:31:51.062792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:34.270 15:31:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:34.270 15:31:51 -- common/autotest_common.sh@850 -- # return 0 00:21:34.270 15:31:51 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:34.270 15:31:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.270 15:31:51 -- common/autotest_common.sh@10 -- # set +x 00:21:34.270 [2024-04-26 15:31:51.699301] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.270 15:31:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.270 15:31:51 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:34.270 15:31:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:34.270 15:31:51 -- common/autotest_common.sh@10 -- # set +x 00:21:34.531 15:31:51 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:34.531 15:31:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.531 15:31:51 -- common/autotest_common.sh@10 -- # set +x 00:21:34.531 Malloc0 00:21:34.531 15:31:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.531 15:31:51 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:34.531 15:31:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.531 15:31:51 -- common/autotest_common.sh@10 -- # set +x 00:21:34.531 15:31:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.531 15:31:51 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
--nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:21:34.531 15:31:51 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:34.531 15:31:51 -- common/autotest_common.sh@10 -- # set +x
00:21:34.531 15:31:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:34.531 15:31:51 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:34.531 15:31:51 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:34.531 15:31:51 -- common/autotest_common.sh@10 -- # set +x
00:21:34.531 [2024-04-26 15:31:51.798796] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:34.531 15:31:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:34.531 15:31:51 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:21:34.531 15:31:51 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:34.531 15:31:51 -- common/autotest_common.sh@10 -- # set +x
00:21:34.531 15:31:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:34.531 15:31:51 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:21:34.531 15:31:51 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:34.531 15:31:51 -- common/autotest_common.sh@10 -- # set +x
00:21:34.531 [2024-04-26 15:31:51.822628] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:21:34.531 [
00:21:34.531 {
00:21:34.531 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:21:34.531 "subtype": "Discovery",
00:21:34.531 "listen_addresses": [
00:21:34.531 {
00:21:34.531 "transport": "TCP",
00:21:34.531 "trtype": "TCP",
00:21:34.531 "adrfam": "IPv4",
00:21:34.531 "traddr": "10.0.0.2",
00:21:34.531 "trsvcid": "4420"
00:21:34.531 }
00:21:34.531 ],
00:21:34.531 "allow_any_host": true,
00:21:34.531 "hosts": []
00:21:34.531 },
00:21:34.531 {
00:21:34.531 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:21:34.531 "subtype": "NVMe",
00:21:34.531 "listen_addresses": [
00:21:34.531 {
00:21:34.531 "transport": "TCP",
00:21:34.531 "trtype": "TCP",
00:21:34.531 "adrfam": "IPv4",
00:21:34.531 "traddr": "10.0.0.2",
00:21:34.531 "trsvcid": "4420"
00:21:34.531 }
00:21:34.531 ],
00:21:34.531 "allow_any_host": true,
00:21:34.531 "hosts": [],
00:21:34.531 "serial_number": "SPDK00000000000001",
00:21:34.531 "model_number": "SPDK bdev Controller",
00:21:34.531 "max_namespaces": 32,
00:21:34.531 "min_cntlid": 1,
00:21:34.531 "max_cntlid": 65519,
00:21:34.531 "namespaces": [
00:21:34.531 {
00:21:34.531 "nsid": 1,
00:21:34.531 "bdev_name": "Malloc0",
00:21:34.531 "name": "Malloc0",
00:21:34.531 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:21:34.531 "eui64": "ABCDEF0123456789",
00:21:34.531 "uuid": "65e54625-d689-43eb-ac48-1b70868f347c"
00:21:34.531 }
00:21:34.531 ]
00:21:34.531 }
00:21:34.531 ]
00:21:34.531 15:31:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:34.531 15:31:51 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:21:34.531 [2024-04-26 15:31:51.861204] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
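(Note: the `nvmf_get_subsystems` reply above is plain JSON once each line's `00:21:34.531` elapsed-time prefix is stripped. A minimal, self-contained sketch of recovering it from a log excerpt — the excerpt is retyped from this log, not captured live:)

```python
import json
import re

# Excerpt of the nvmf_get_subsystems reply as it appears in the log,
# each line carrying the test's elapsed-time stamp.
log = """\
00:21:34.531 [
00:21:34.531 {
00:21:34.531 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:21:34.531 "subtype": "Discovery",
00:21:34.531 "listen_addresses": [
00:21:34.531 {
00:21:34.531 "trtype": "TCP",
00:21:34.531 "adrfam": "IPv4",
00:21:34.531 "traddr": "10.0.0.2",
00:21:34.531 "trsvcid": "4420"
00:21:34.531 }
00:21:34.531 ],
00:21:34.531 "allow_any_host": true,
00:21:34.531 "hosts": []
00:21:34.531 }
00:21:34.531 ]
"""

# Drop the HH:MM:SS.mmm prefix from every line; what remains is valid JSON.
stripped = "\n".join(re.sub(r"^\d{2}:\d{2}:\d{2}\.\d{3} ?", "", line)
                     for line in log.splitlines())
subsystems = json.loads(stripped)

print(subsystems[0]["nqn"])                             # nqn.2014-08.org.nvmexpress.discovery
print(subsystems[0]["listen_addresses"][0]["trsvcid"])  # 4420
```

(End of note; the raw log continues below.)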
00:21:34.531 [2024-04-26 15:31:51.861270] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1705263 ] 00:21:34.531 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.531 [2024-04-26 15:31:51.895789] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:34.531 [2024-04-26 15:31:51.899844] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:34.531 [2024-04-26 15:31:51.899850] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:34.532 [2024-04-26 15:31:51.899862] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:34.532 [2024-04-26 15:31:51.899869] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:34.532 [2024-04-26 15:31:51.900263] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:34.532 [2024-04-26 15:31:51.900293] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1707c30 0 00:21:34.532 [2024-04-26 15:31:51.914847] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:34.532 [2024-04-26 15:31:51.914857] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:34.532 [2024-04-26 15:31:51.914861] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:34.532 [2024-04-26 15:31:51.914865] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:34.532 [2024-04-26 15:31:51.914898] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.914904] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:21:34.532 [2024-04-26 15:31:51.914908] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1707c30) 00:21:34.532 [2024-04-26 15:31:51.914921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:34.532 [2024-04-26 15:31:51.914936] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x176f980, cid 0, qid 0 00:21:34.532 [2024-04-26 15:31:51.922848] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.532 [2024-04-26 15:31:51.922857] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.532 [2024-04-26 15:31:51.922861] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.922865] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x176f980) on tqpair=0x1707c30 00:21:34.532 [2024-04-26 15:31:51.922875] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:34.532 [2024-04-26 15:31:51.922881] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:34.532 [2024-04-26 15:31:51.922886] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:34.532 [2024-04-26 15:31:51.922898] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.922902] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.922909] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1707c30) 00:21:34.532 [2024-04-26 15:31:51.922916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.532 [2024-04-26 15:31:51.922929] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x176f980, cid 0, qid 0 00:21:34.532 [2024-04-26 15:31:51.923133] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.532 [2024-04-26 15:31:51.923140] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.532 [2024-04-26 15:31:51.923143] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.923147] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x176f980) on tqpair=0x1707c30 00:21:34.532 [2024-04-26 15:31:51.923153] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:34.532 [2024-04-26 15:31:51.923160] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:34.532 [2024-04-26 15:31:51.923166] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.923170] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.923173] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1707c30) 00:21:34.532 [2024-04-26 15:31:51.923180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.532 [2024-04-26 15:31:51.923190] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x176f980, cid 0, qid 0 00:21:34.532 [2024-04-26 15:31:51.923377] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.532 [2024-04-26 15:31:51.923383] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.532 [2024-04-26 15:31:51.923387] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.923390] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x176f980) on tqpair=0x1707c30 00:21:34.532 [2024-04-26 
15:31:51.923396] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:34.532 [2024-04-26 15:31:51.923403] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:34.532 [2024-04-26 15:31:51.923410] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.923414] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.923417] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1707c30) 00:21:34.532 [2024-04-26 15:31:51.923424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.532 [2024-04-26 15:31:51.923433] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x176f980, cid 0, qid 0 00:21:34.532 [2024-04-26 15:31:51.923608] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.532 [2024-04-26 15:31:51.923615] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.532 [2024-04-26 15:31:51.923618] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.923622] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x176f980) on tqpair=0x1707c30 00:21:34.532 [2024-04-26 15:31:51.923627] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:34.532 [2024-04-26 15:31:51.923636] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.923640] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.923643] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x1707c30) 00:21:34.532 [2024-04-26 15:31:51.923650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.532 [2024-04-26 15:31:51.923662] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x176f980, cid 0, qid 0 00:21:34.532 [2024-04-26 15:31:51.923852] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.532 [2024-04-26 15:31:51.923859] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.532 [2024-04-26 15:31:51.923862] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.923866] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x176f980) on tqpair=0x1707c30 00:21:34.532 [2024-04-26 15:31:51.923871] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:34.532 [2024-04-26 15:31:51.923876] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:34.532 [2024-04-26 15:31:51.923883] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:34.532 [2024-04-26 15:31:51.923988] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:34.532 [2024-04-26 15:31:51.923992] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:34.532 [2024-04-26 15:31:51.924000] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.924004] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.924007] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1707c30) 00:21:34.532 [2024-04-26 15:31:51.924014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.532 [2024-04-26 15:31:51.924024] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x176f980, cid 0, qid 0 00:21:34.532 [2024-04-26 15:31:51.924202] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.532 [2024-04-26 15:31:51.924209] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.532 [2024-04-26 15:31:51.924212] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.924216] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x176f980) on tqpair=0x1707c30 00:21:34.532 [2024-04-26 15:31:51.924221] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:34.532 [2024-04-26 15:31:51.924230] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.924234] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.924237] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1707c30) 00:21:34.532 [2024-04-26 15:31:51.924244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.532 [2024-04-26 15:31:51.924253] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x176f980, cid 0, qid 0 00:21:34.532 [2024-04-26 15:31:51.924426] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.532 [2024-04-26 15:31:51.924432] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.532 [2024-04-26 15:31:51.924435] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.924439] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x176f980) on tqpair=0x1707c30 00:21:34.532 [2024-04-26 15:31:51.924444] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:34.532 [2024-04-26 15:31:51.924449] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:34.532 [2024-04-26 15:31:51.924456] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:34.532 [2024-04-26 15:31:51.924469] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:34.532 [2024-04-26 15:31:51.924479] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.924483] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1707c30) 00:21:34.532 [2024-04-26 15:31:51.924490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.532 [2024-04-26 15:31:51.924500] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x176f980, cid 0, qid 0 00:21:34.532 [2024-04-26 15:31:51.924715] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:34.532 [2024-04-26 15:31:51.924722] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:34.532 [2024-04-26 15:31:51.924725] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.924729] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x1707c30): datao=0, datal=4096, cccid=0 00:21:34.532 [2024-04-26 15:31:51.924734] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x176f980) on tqpair(0x1707c30): expected_datao=0, payload_size=4096 00:21:34.532 [2024-04-26 15:31:51.924738] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.924757] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.924762] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.965038] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.532 [2024-04-26 15:31:51.965050] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.532 [2024-04-26 15:31:51.965053] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.965057] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x176f980) on tqpair=0x1707c30 00:21:34.532 [2024-04-26 15:31:51.965065] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:34.532 [2024-04-26 15:31:51.965070] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:34.532 [2024-04-26 15:31:51.965075] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:34.532 [2024-04-26 15:31:51.965080] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:34.532 [2024-04-26 15:31:51.965084] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:34.532 [2024-04-26 15:31:51.965089] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:34.532 [2024-04-26 
15:31:51.965097] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:34.532 [2024-04-26 15:31:51.965104] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.965108] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.965112] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1707c30) 00:21:34.532 [2024-04-26 15:31:51.965120] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:34.532 [2024-04-26 15:31:51.965132] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x176f980, cid 0, qid 0 00:21:34.532 [2024-04-26 15:31:51.965320] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.532 [2024-04-26 15:31:51.965327] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.532 [2024-04-26 15:31:51.965330] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.965334] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x176f980) on tqpair=0x1707c30 00:21:34.532 [2024-04-26 15:31:51.965345] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.965349] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.965352] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1707c30) 00:21:34.532 [2024-04-26 15:31:51.965358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.532 [2024-04-26 15:31:51.965364] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.965368] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.965372] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1707c30) 00:21:34.532 [2024-04-26 15:31:51.965377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.532 [2024-04-26 15:31:51.965383] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.965387] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.965390] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1707c30) 00:21:34.532 [2024-04-26 15:31:51.965396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.532 [2024-04-26 15:31:51.965402] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.965406] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.965409] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1707c30) 00:21:34.532 [2024-04-26 15:31:51.965415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.532 [2024-04-26 15:31:51.965419] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:34.532 [2024-04-26 15:31:51.965430] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:34.532 [2024-04-26 15:31:51.965436] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.965440] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1707c30) 00:21:34.532 [2024-04-26 15:31:51.965447] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.532 [2024-04-26 15:31:51.965458] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x176f980, cid 0, qid 0 00:21:34.532 [2024-04-26 15:31:51.965464] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x176fae0, cid 1, qid 0 00:21:34.532 [2024-04-26 15:31:51.965468] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x176fc40, cid 2, qid 0 00:21:34.532 [2024-04-26 15:31:51.965473] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x176fda0, cid 3, qid 0 00:21:34.532 [2024-04-26 15:31:51.965478] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x176ff00, cid 4, qid 0 00:21:34.532 [2024-04-26 15:31:51.965730] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.532 [2024-04-26 15:31:51.965736] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.532 [2024-04-26 15:31:51.965740] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.965743] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x176ff00) on tqpair=0x1707c30 00:21:34.532 [2024-04-26 15:31:51.965749] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:34.532 [2024-04-26 15:31:51.965754] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:34.532 [2024-04-26 15:31:51.965764] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.532 [2024-04-26 15:31:51.965770] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1707c30) 00:21:34.533 [2024-04-26 15:31:51.965777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.533 [2024-04-26 15:31:51.965786] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x176ff00, cid 4, qid 0 00:21:34.533 [2024-04-26 15:31:51.965989] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:34.533 [2024-04-26 15:31:51.965996] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:34.533 [2024-04-26 15:31:51.966000] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:34.533 [2024-04-26 15:31:51.966004] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1707c30): datao=0, datal=4096, cccid=4 00:21:34.533 [2024-04-26 15:31:51.966008] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x176ff00) on tqpair(0x1707c30): expected_datao=0, payload_size=4096 00:21:34.533 [2024-04-26 15:31:51.966012] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.533 [2024-04-26 15:31:51.966019] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:34.533 [2024-04-26 15:31:51.966022] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:34.533 [2024-04-26 15:31:51.966176] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.533 [2024-04-26 15:31:51.966182] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.533 [2024-04-26 15:31:51.966186] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.533 [2024-04-26 15:31:51.966189] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x176ff00) on tqpair=0x1707c30 00:21:34.533 [2024-04-26 15:31:51.966201] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:34.533 [2024-04-26 15:31:51.966219] 
nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.533 [2024-04-26 15:31:51.966223] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1707c30) 00:21:34.533 [2024-04-26 15:31:51.966230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.533 [2024-04-26 15:31:51.966236] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.533 [2024-04-26 15:31:51.966240] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.533 [2024-04-26 15:31:51.966243] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1707c30) 00:21:34.533 [2024-04-26 15:31:51.966249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.533 [2024-04-26 15:31:51.966263] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x176ff00, cid 4, qid 0 00:21:34.533 [2024-04-26 15:31:51.966269] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1770060, cid 5, qid 0 00:21:34.533 [2024-04-26 15:31:51.966548] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:34.533 [2024-04-26 15:31:51.966554] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:34.533 [2024-04-26 15:31:51.966557] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:34.533 [2024-04-26 15:31:51.966561] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1707c30): datao=0, datal=1024, cccid=4 00:21:34.533 [2024-04-26 15:31:51.966565] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x176ff00) on tqpair(0x1707c30): expected_datao=0, payload_size=1024 00:21:34.533 [2024-04-26 15:31:51.966569] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.533 [2024-04-26 15:31:51.966576] 
nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:34.533 [2024-04-26 15:31:51.966579] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:34.533 [2024-04-26 15:31:51.966585] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.533 [2024-04-26 15:31:51.966590] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.533 [2024-04-26 15:31:51.966596] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.533 [2024-04-26 15:31:51.966599] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1770060) on tqpair=0x1707c30 00:21:34.797 [2024-04-26 15:31:52.010844] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.797 [2024-04-26 15:31:52.010855] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.797 [2024-04-26 15:31:52.010858] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.797 [2024-04-26 15:31:52.010862] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x176ff00) on tqpair=0x1707c30 00:21:34.797 [2024-04-26 15:31:52.010874] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.797 [2024-04-26 15:31:52.010878] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1707c30) 00:21:34.797 [2024-04-26 15:31:52.010884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.797 [2024-04-26 15:31:52.010899] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x176ff00, cid 4, qid 0 00:21:34.797 [2024-04-26 15:31:52.011081] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:34.797 [2024-04-26 15:31:52.011087] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:34.798 [2024-04-26 15:31:52.011091] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
enter 00:21:34.798 [2024-04-26 15:31:52.011094] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1707c30): datao=0, datal=3072, cccid=4 00:21:34.798 [2024-04-26 15:31:52.011099] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x176ff00) on tqpair(0x1707c30): expected_datao=0, payload_size=3072 00:21:34.798 [2024-04-26 15:31:52.011103] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.798 [2024-04-26 15:31:52.011124] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:34.798 [2024-04-26 15:31:52.011128] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:34.798 [2024-04-26 15:31:52.052076] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.798 [2024-04-26 15:31:52.052086] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.798 [2024-04-26 15:31:52.052090] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.798 [2024-04-26 15:31:52.052094] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x176ff00) on tqpair=0x1707c30 00:21:34.798 [2024-04-26 15:31:52.052104] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.798 [2024-04-26 15:31:52.052108] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1707c30) 00:21:34.798 [2024-04-26 15:31:52.052115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.798 [2024-04-26 15:31:52.052129] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x176ff00, cid 4, qid 0 00:21:34.798 [2024-04-26 15:31:52.052326] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:34.798 [2024-04-26 15:31:52.052332] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:34.798 [2024-04-26 15:31:52.052335] 
nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:34.798 [2024-04-26 15:31:52.052339] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1707c30): datao=0, datal=8, cccid=4
00:21:34.798 [2024-04-26 15:31:52.052343] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x176ff00) on tqpair(0x1707c30): expected_datao=0, payload_size=8
00:21:34.798 [2024-04-26 15:31:52.052348] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:34.798 [2024-04-26 15:31:52.052354] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:34.798 [2024-04-26 15:31:52.052358] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:34.798 [2024-04-26 15:31:52.093077] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:34.798 [2024-04-26 15:31:52.093086] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:34.798 [2024-04-26 15:31:52.093089] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:34.798 [2024-04-26 15:31:52.093096] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x176ff00) on tqpair=0x1707c30
00:21:34.798 =====================================================
00:21:34.798 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:21:34.798 =====================================================
00:21:34.798 Controller Capabilities/Features
00:21:34.798 ================================
00:21:34.798 Vendor ID: 0000
00:21:34.798 Subsystem Vendor ID: 0000
00:21:34.798 Serial Number: ....................
00:21:34.798 Model Number: ........................................
00:21:34.798 Firmware Version: 24.05
00:21:34.798 Recommended Arb Burst: 0
00:21:34.798 IEEE OUI Identifier: 00 00 00
00:21:34.798 Multi-path I/O
00:21:34.798 May have multiple subsystem ports: No
00:21:34.798 May have multiple controllers: No
00:21:34.798 Associated with SR-IOV VF: No
00:21:34.798 Max Data Transfer Size: 131072
00:21:34.798 Max Number of Namespaces: 0
00:21:34.798 Max Number of I/O Queues: 1024
00:21:34.798 NVMe Specification Version (VS): 1.3
00:21:34.798 NVMe Specification Version (Identify): 1.3
00:21:34.798 Maximum Queue Entries: 128
00:21:34.798 Contiguous Queues Required: Yes
00:21:34.798 Arbitration Mechanisms Supported
00:21:34.798 Weighted Round Robin: Not Supported
00:21:34.798 Vendor Specific: Not Supported
00:21:34.798 Reset Timeout: 15000 ms
00:21:34.798 Doorbell Stride: 4 bytes
00:21:34.798 NVM Subsystem Reset: Not Supported
00:21:34.798 Command Sets Supported
00:21:34.798 NVM Command Set: Supported
00:21:34.798 Boot Partition: Not Supported
00:21:34.798 Memory Page Size Minimum: 4096 bytes
00:21:34.798 Memory Page Size Maximum: 4096 bytes
00:21:34.798 Persistent Memory Region: Not Supported
00:21:34.798 Optional Asynchronous Events Supported
00:21:34.798 Namespace Attribute Notices: Not Supported
00:21:34.798 Firmware Activation Notices: Not Supported
00:21:34.798 ANA Change Notices: Not Supported
00:21:34.798 PLE Aggregate Log Change Notices: Not Supported
00:21:34.798 LBA Status Info Alert Notices: Not Supported
00:21:34.798 EGE Aggregate Log Change Notices: Not Supported
00:21:34.798 Normal NVM Subsystem Shutdown event: Not Supported
00:21:34.798 Zone Descriptor Change Notices: Not Supported
00:21:34.798 Discovery Log Change Notices: Supported
00:21:34.798 Controller Attributes
00:21:34.798 128-bit Host Identifier: Not Supported
00:21:34.798 Non-Operational Permissive Mode: Not Supported
00:21:34.798 NVM Sets: Not Supported
00:21:34.798 Read Recovery Levels: Not Supported
00:21:34.798 Endurance Groups: Not Supported
00:21:34.798
Predictable Latency Mode: Not Supported 00:21:34.798 Traffic Based Keep ALive: Not Supported 00:21:34.798 Namespace Granularity: Not Supported 00:21:34.798 SQ Associations: Not Supported 00:21:34.798 UUID List: Not Supported 00:21:34.798 Multi-Domain Subsystem: Not Supported 00:21:34.798 Fixed Capacity Management: Not Supported 00:21:34.798 Variable Capacity Management: Not Supported 00:21:34.798 Delete Endurance Group: Not Supported 00:21:34.798 Delete NVM Set: Not Supported 00:21:34.798 Extended LBA Formats Supported: Not Supported 00:21:34.798 Flexible Data Placement Supported: Not Supported 00:21:34.798 00:21:34.798 Controller Memory Buffer Support 00:21:34.798 ================================ 00:21:34.798 Supported: No 00:21:34.798 00:21:34.798 Persistent Memory Region Support 00:21:34.798 ================================ 00:21:34.798 Supported: No 00:21:34.798 00:21:34.798 Admin Command Set Attributes 00:21:34.798 ============================ 00:21:34.798 Security Send/Receive: Not Supported 00:21:34.798 Format NVM: Not Supported 00:21:34.798 Firmware Activate/Download: Not Supported 00:21:34.798 Namespace Management: Not Supported 00:21:34.798 Device Self-Test: Not Supported 00:21:34.798 Directives: Not Supported 00:21:34.798 NVMe-MI: Not Supported 00:21:34.798 Virtualization Management: Not Supported 00:21:34.798 Doorbell Buffer Config: Not Supported 00:21:34.798 Get LBA Status Capability: Not Supported 00:21:34.798 Command & Feature Lockdown Capability: Not Supported 00:21:34.798 Abort Command Limit: 1 00:21:34.798 Async Event Request Limit: 4 00:21:34.798 Number of Firmware Slots: N/A 00:21:34.798 Firmware Slot 1 Read-Only: N/A 00:21:34.798 Firmware Activation Without Reset: N/A 00:21:34.798 Multiple Update Detection Support: N/A 00:21:34.798 Firmware Update Granularity: No Information Provided 00:21:34.798 Per-Namespace SMART Log: No 00:21:34.798 Asymmetric Namespace Access Log Page: Not Supported 00:21:34.798 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:21:34.798 Command Effects Log Page: Not Supported 00:21:34.798 Get Log Page Extended Data: Supported 00:21:34.798 Telemetry Log Pages: Not Supported 00:21:34.798 Persistent Event Log Pages: Not Supported 00:21:34.798 Supported Log Pages Log Page: May Support 00:21:34.798 Commands Supported & Effects Log Page: Not Supported 00:21:34.798 Feature Identifiers & Effects Log Page:May Support 00:21:34.798 NVMe-MI Commands & Effects Log Page: May Support 00:21:34.798 Data Area 4 for Telemetry Log: Not Supported 00:21:34.798 Error Log Page Entries Supported: 128 00:21:34.798 Keep Alive: Not Supported 00:21:34.798 00:21:34.798 NVM Command Set Attributes 00:21:34.798 ========================== 00:21:34.798 Submission Queue Entry Size 00:21:34.798 Max: 1 00:21:34.798 Min: 1 00:21:34.798 Completion Queue Entry Size 00:21:34.798 Max: 1 00:21:34.798 Min: 1 00:21:34.798 Number of Namespaces: 0 00:21:34.798 Compare Command: Not Supported 00:21:34.798 Write Uncorrectable Command: Not Supported 00:21:34.798 Dataset Management Command: Not Supported 00:21:34.798 Write Zeroes Command: Not Supported 00:21:34.798 Set Features Save Field: Not Supported 00:21:34.798 Reservations: Not Supported 00:21:34.798 Timestamp: Not Supported 00:21:34.798 Copy: Not Supported 00:21:34.798 Volatile Write Cache: Not Present 00:21:34.798 Atomic Write Unit (Normal): 1 00:21:34.798 Atomic Write Unit (PFail): 1 00:21:34.798 Atomic Compare & Write Unit: 1 00:21:34.798 Fused Compare & Write: Supported 00:21:34.798 Scatter-Gather List 00:21:34.798 SGL Command Set: Supported 00:21:34.798 SGL Keyed: Supported 00:21:34.798 SGL Bit Bucket Descriptor: Not Supported 00:21:34.798 SGL Metadata Pointer: Not Supported 00:21:34.798 Oversized SGL: Not Supported 00:21:34.798 SGL Metadata Address: Not Supported 00:21:34.798 SGL Offset: Supported 00:21:34.798 Transport SGL Data Block: Not Supported 00:21:34.798 Replay Protected Memory Block: Not Supported 00:21:34.798 00:21:34.798 
Firmware Slot Information 00:21:34.798 ========================= 00:21:34.798 Active slot: 0 00:21:34.798 00:21:34.798 00:21:34.798 Error Log 00:21:34.798 ========= 00:21:34.798 00:21:34.798 Active Namespaces 00:21:34.798 ================= 00:21:34.798 Discovery Log Page 00:21:34.798 ================== 00:21:34.799 Generation Counter: 2 00:21:34.799 Number of Records: 2 00:21:34.799 Record Format: 0 00:21:34.799 00:21:34.799 Discovery Log Entry 0 00:21:34.799 ---------------------- 00:21:34.799 Transport Type: 3 (TCP) 00:21:34.799 Address Family: 1 (IPv4) 00:21:34.799 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:34.799 Entry Flags: 00:21:34.799 Duplicate Returned Information: 1 00:21:34.799 Explicit Persistent Connection Support for Discovery: 1 00:21:34.799 Transport Requirements: 00:21:34.799 Secure Channel: Not Required 00:21:34.799 Port ID: 0 (0x0000) 00:21:34.799 Controller ID: 65535 (0xffff) 00:21:34.799 Admin Max SQ Size: 128 00:21:34.799 Transport Service Identifier: 4420 00:21:34.799 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:34.799 Transport Address: 10.0.0.2 00:21:34.799 Discovery Log Entry 1 00:21:34.799 ---------------------- 00:21:34.799 Transport Type: 3 (TCP) 00:21:34.799 Address Family: 1 (IPv4) 00:21:34.799 Subsystem Type: 2 (NVM Subsystem) 00:21:34.799 Entry Flags: 00:21:34.799 Duplicate Returned Information: 0 00:21:34.799 Explicit Persistent Connection Support for Discovery: 0 00:21:34.799 Transport Requirements: 00:21:34.799 Secure Channel: Not Required 00:21:34.799 Port ID: 0 (0x0000) 00:21:34.799 Controller ID: 65535 (0xffff) 00:21:34.799 Admin Max SQ Size: 128 00:21:34.799 Transport Service Identifier: 4420 00:21:34.799 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:34.799 Transport Address: 10.0.0.2 [2024-04-26 15:31:52.093180] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:34.799 [2024-04-26 15:31:52.093192] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.799 [2024-04-26 15:31:52.093199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.799 [2024-04-26 15:31:52.093205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.799 [2024-04-26 15:31:52.093210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.799 [2024-04-26 15:31:52.093219] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.799 [2024-04-26 15:31:52.093223] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.799 [2024-04-26 15:31:52.093226] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1707c30) 00:21:34.799 [2024-04-26 15:31:52.093234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.799 [2024-04-26 15:31:52.093246] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x176fda0, cid 3, qid 0 00:21:34.799 [2024-04-26 15:31:52.093369] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.799 [2024-04-26 15:31:52.093375] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.799 [2024-04-26 15:31:52.093378] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.799 [2024-04-26 15:31:52.093382] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x176fda0) on tqpair=0x1707c30 00:21:34.799 [2024-04-26 15:31:52.093389] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.799 [2024-04-26 15:31:52.093393] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.799 [2024-04-26 
15:31:52.093397] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1707c30) 00:21:34.799 [2024-04-26 15:31:52.093404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.799 [2024-04-26 15:31:52.093416] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x176fda0, cid 3, qid 0 00:21:34.799 [2024-04-26 15:31:52.093620] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.799 [2024-04-26 15:31:52.093626] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.799 [2024-04-26 15:31:52.093629] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.799 [2024-04-26 15:31:52.093633] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x176fda0) on tqpair=0x1707c30 00:21:34.799 [2024-04-26 15:31:52.093638] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:34.799 [2024-04-26 15:31:52.093643] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:34.799 [2024-04-26 15:31:52.093652] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.799 [2024-04-26 15:31:52.093655] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.799 [2024-04-26 15:31:52.093659] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1707c30) 00:21:34.799 [2024-04-26 15:31:52.093665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.799 [2024-04-26 15:31:52.093675] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x176fda0, cid 3, qid 0 00:21:34.799 [2024-04-26 15:31:52.097846] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.799 [2024-04-26 
15:31:52.097854] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.799 [2024-04-26 15:31:52.097857] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.799 [2024-04-26 15:31:52.097863] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x176fda0) on tqpair=0x1707c30 00:21:34.799 [2024-04-26 15:31:52.097874] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.799 [2024-04-26 15:31:52.097878] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.799 [2024-04-26 15:31:52.097882] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1707c30) 00:21:34.799 [2024-04-26 15:31:52.097889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.799 [2024-04-26 15:31:52.097900] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x176fda0, cid 3, qid 0 00:21:34.799 [2024-04-26 15:31:52.098084] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.799 [2024-04-26 15:31:52.098090] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.799 [2024-04-26 15:31:52.098093] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.799 [2024-04-26 15:31:52.098097] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x176fda0) on tqpair=0x1707c30 00:21:34.799 [2024-04-26 15:31:52.098105] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:21:34.799 00:21:34.799 15:31:52 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:34.799 [2024-04-26 15:31:52.141166] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:21:34.799 [2024-04-26 15:31:52.141230] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1705268 ] 00:21:34.799 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.799 [2024-04-26 15:31:52.175334] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:34.799 [2024-04-26 15:31:52.175372] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:34.799 [2024-04-26 15:31:52.175377] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:34.799 [2024-04-26 15:31:52.175387] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:34.799 [2024-04-26 15:31:52.175394] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:34.799 [2024-04-26 15:31:52.175810] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:34.799 [2024-04-26 15:31:52.175834] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x6e5c30 0 00:21:34.799 [2024-04-26 15:31:52.181847] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:34.799 [2024-04-26 15:31:52.181857] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:34.799 [2024-04-26 15:31:52.181861] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:34.799 [2024-04-26 15:31:52.181864] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:34.799 [2024-04-26 15:31:52.181894] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.799 [2024-04-26 15:31:52.181899] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.799 [2024-04-26 
15:31:52.181903] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6e5c30) 00:21:34.799 [2024-04-26 15:31:52.181913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:34.799 [2024-04-26 15:31:52.181928] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74d980, cid 0, qid 0 00:21:34.799 [2024-04-26 15:31:52.189846] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.799 [2024-04-26 15:31:52.189858] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.799 [2024-04-26 15:31:52.189862] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.799 [2024-04-26 15:31:52.189866] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74d980) on tqpair=0x6e5c30 00:21:34.799 [2024-04-26 15:31:52.189877] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:34.799 [2024-04-26 15:31:52.189883] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:34.799 [2024-04-26 15:31:52.189888] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:34.799 [2024-04-26 15:31:52.189899] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.799 [2024-04-26 15:31:52.189903] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.799 [2024-04-26 15:31:52.189906] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6e5c30) 00:21:34.799 [2024-04-26 15:31:52.189914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.799 [2024-04-26 15:31:52.189926] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74d980, cid 0, qid 0 00:21:34.799 [2024-04-26 
15:31:52.190128] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.799 [2024-04-26 15:31:52.190135] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.800 [2024-04-26 15:31:52.190139] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.190142] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74d980) on tqpair=0x6e5c30 00:21:34.800 [2024-04-26 15:31:52.190147] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:34.800 [2024-04-26 15:31:52.190154] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:34.800 [2024-04-26 15:31:52.190161] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.190165] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.190168] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6e5c30) 00:21:34.800 [2024-04-26 15:31:52.190175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.800 [2024-04-26 15:31:52.190185] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74d980, cid 0, qid 0 00:21:34.800 [2024-04-26 15:31:52.190369] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.800 [2024-04-26 15:31:52.190375] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.800 [2024-04-26 15:31:52.190379] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.190382] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74d980) on tqpair=0x6e5c30 00:21:34.800 [2024-04-26 15:31:52.190387] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:34.800 [2024-04-26 15:31:52.190395] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:34.800 [2024-04-26 15:31:52.190401] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.190405] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.190408] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6e5c30) 00:21:34.800 [2024-04-26 15:31:52.190415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.800 [2024-04-26 15:31:52.190425] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74d980, cid 0, qid 0 00:21:34.800 [2024-04-26 15:31:52.190612] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.800 [2024-04-26 15:31:52.190621] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.800 [2024-04-26 15:31:52.190624] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.190628] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74d980) on tqpair=0x6e5c30 00:21:34.800 [2024-04-26 15:31:52.190633] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:34.800 [2024-04-26 15:31:52.190642] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.190645] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.190649] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6e5c30) 00:21:34.800 [2024-04-26 15:31:52.190655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.800 [2024-04-26 15:31:52.190665] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74d980, cid 0, qid 0 00:21:34.800 [2024-04-26 15:31:52.190880] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.800 [2024-04-26 15:31:52.190886] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.800 [2024-04-26 15:31:52.190890] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.190893] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74d980) on tqpair=0x6e5c30 00:21:34.800 [2024-04-26 15:31:52.190898] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:34.800 [2024-04-26 15:31:52.190902] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:34.800 [2024-04-26 15:31:52.190910] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:34.800 [2024-04-26 15:31:52.191015] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:34.800 [2024-04-26 15:31:52.191019] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:34.800 [2024-04-26 15:31:52.191026] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.191030] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.191033] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6e5c30) 00:21:34.800 [2024-04-26 15:31:52.191040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.800 [2024-04-26 15:31:52.191050] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74d980, cid 0, qid 0 00:21:34.800 [2024-04-26 15:31:52.191211] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.800 [2024-04-26 15:31:52.191217] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.800 [2024-04-26 15:31:52.191220] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.191224] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74d980) on tqpair=0x6e5c30 00:21:34.800 [2024-04-26 15:31:52.191228] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:34.800 [2024-04-26 15:31:52.191237] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.191241] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.191244] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6e5c30) 00:21:34.800 [2024-04-26 15:31:52.191251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.800 [2024-04-26 15:31:52.191260] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74d980, cid 0, qid 0 00:21:34.800 [2024-04-26 15:31:52.191451] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.800 [2024-04-26 15:31:52.191459] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.800 [2024-04-26 15:31:52.191462] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.191466] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74d980) on tqpair=0x6e5c30 00:21:34.800 [2024-04-26 
15:31:52.191470] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:34.800 [2024-04-26 15:31:52.191475] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:34.800 [2024-04-26 15:31:52.191482] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:34.800 [2024-04-26 15:31:52.191489] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:34.800 [2024-04-26 15:31:52.191500] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.191503] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6e5c30) 00:21:34.800 [2024-04-26 15:31:52.191510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.800 [2024-04-26 15:31:52.191520] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74d980, cid 0, qid 0 00:21:34.800 [2024-04-26 15:31:52.191744] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:34.800 [2024-04-26 15:31:52.191751] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:34.800 [2024-04-26 15:31:52.191754] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.191758] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6e5c30): datao=0, datal=4096, cccid=0 00:21:34.800 [2024-04-26 15:31:52.191762] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x74d980) on tqpair(0x6e5c30): expected_datao=0, payload_size=4096 00:21:34.800 [2024-04-26 15:31:52.191767] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.191784] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.191788] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.232845] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.800 [2024-04-26 15:31:52.232855] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.800 [2024-04-26 15:31:52.232859] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.232863] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74d980) on tqpair=0x6e5c30 00:21:34.800 [2024-04-26 15:31:52.232870] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:34.800 [2024-04-26 15:31:52.232875] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:34.800 [2024-04-26 15:31:52.232879] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:34.800 [2024-04-26 15:31:52.232883] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:34.800 [2024-04-26 15:31:52.232887] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:34.800 [2024-04-26 15:31:52.232892] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:34.800 [2024-04-26 15:31:52.232900] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:34.800 [2024-04-26 15:31:52.232907] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.232911] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.232914] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6e5c30) 00:21:34.800 [2024-04-26 15:31:52.232925] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:34.800 [2024-04-26 15:31:52.232937] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74d980, cid 0, qid 0 00:21:34.800 [2024-04-26 15:31:52.233167] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.800 [2024-04-26 15:31:52.233174] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.800 [2024-04-26 15:31:52.233177] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.233181] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74d980) on tqpair=0x6e5c30 00:21:34.800 [2024-04-26 15:31:52.233188] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.233192] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.800 [2024-04-26 15:31:52.233195] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6e5c30) 00:21:34.800 [2024-04-26 15:31:52.233201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.800 [2024-04-26 15:31:52.233207] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.233211] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.233214] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x6e5c30) 00:21:34.801 [2024-04-26 15:31:52.233220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:21:34.801 [2024-04-26 15:31:52.233226] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.233230] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.233233] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x6e5c30) 00:21:34.801 [2024-04-26 15:31:52.233239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.801 [2024-04-26 15:31:52.233245] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.233249] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.233252] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6e5c30) 00:21:34.801 [2024-04-26 15:31:52.233258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.801 [2024-04-26 15:31:52.233263] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:34.801 [2024-04-26 15:31:52.233273] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:34.801 [2024-04-26 15:31:52.233280] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.233283] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6e5c30) 00:21:34.801 [2024-04-26 15:31:52.233290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.801 [2024-04-26 15:31:52.233302] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x74d980, cid 0, qid 0 00:21:34.801 [2024-04-26 15:31:52.233306] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74dae0, cid 1, qid 0 00:21:34.801 [2024-04-26 15:31:52.233311] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74dc40, cid 2, qid 0 00:21:34.801 [2024-04-26 15:31:52.233316] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74dda0, cid 3, qid 0 00:21:34.801 [2024-04-26 15:31:52.233320] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74df00, cid 4, qid 0 00:21:34.801 [2024-04-26 15:31:52.233530] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.801 [2024-04-26 15:31:52.233539] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.801 [2024-04-26 15:31:52.233542] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.233546] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74df00) on tqpair=0x6e5c30 00:21:34.801 [2024-04-26 15:31:52.233550] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:34.801 [2024-04-26 15:31:52.233555] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:34.801 [2024-04-26 15:31:52.233564] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:34.801 [2024-04-26 15:31:52.233570] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:34.801 [2024-04-26 15:31:52.233576] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.233580] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.233584] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6e5c30) 00:21:34.801 [2024-04-26 15:31:52.233590] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:34.801 [2024-04-26 15:31:52.233600] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74df00, cid 4, qid 0 00:21:34.801 [2024-04-26 15:31:52.233793] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.801 [2024-04-26 15:31:52.233800] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.801 [2024-04-26 15:31:52.233803] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.233807] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74df00) on tqpair=0x6e5c30 00:21:34.801 [2024-04-26 15:31:52.233862] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:34.801 [2024-04-26 15:31:52.233872] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:34.801 [2024-04-26 15:31:52.233880] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.233883] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6e5c30) 00:21:34.801 [2024-04-26 15:31:52.233890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.801 [2024-04-26 15:31:52.233900] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74df00, cid 4, qid 0 00:21:34.801 [2024-04-26 15:31:52.234113] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:34.801 [2024-04-26 15:31:52.234120] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:34.801 [2024-04-26 15:31:52.234123] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.234127] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6e5c30): datao=0, datal=4096, cccid=4 00:21:34.801 [2024-04-26 15:31:52.234131] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x74df00) on tqpair(0x6e5c30): expected_datao=0, payload_size=4096 00:21:34.801 [2024-04-26 15:31:52.234136] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.234143] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.234146] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.234345] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.801 [2024-04-26 15:31:52.234351] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.801 [2024-04-26 15:31:52.234355] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.234359] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74df00) on tqpair=0x6e5c30 00:21:34.801 [2024-04-26 15:31:52.234369] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:34.801 [2024-04-26 15:31:52.234381] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:34.801 [2024-04-26 15:31:52.234390] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:34.801 [2024-04-26 15:31:52.234397] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.234400] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x6e5c30) 00:21:34.801 [2024-04-26 15:31:52.234407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.801 [2024-04-26 15:31:52.234417] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74df00, cid 4, qid 0 00:21:34.801 [2024-04-26 15:31:52.234641] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:34.801 [2024-04-26 15:31:52.234648] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:34.801 [2024-04-26 15:31:52.234651] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.234654] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6e5c30): datao=0, datal=4096, cccid=4 00:21:34.801 [2024-04-26 15:31:52.234659] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x74df00) on tqpair(0x6e5c30): expected_datao=0, payload_size=4096 00:21:34.801 [2024-04-26 15:31:52.234663] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.234669] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.234673] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.234853] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.801 [2024-04-26 15:31:52.234860] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.801 [2024-04-26 15:31:52.234863] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.234867] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74df00) on tqpair=0x6e5c30 00:21:34.801 [2024-04-26 15:31:52.234880] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:34.801 [2024-04-26 
15:31:52.234888] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:34.801 [2024-04-26 15:31:52.234895] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.234899] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6e5c30) 00:21:34.801 [2024-04-26 15:31:52.234905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.801 [2024-04-26 15:31:52.234916] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74df00, cid 4, qid 0 00:21:34.801 [2024-04-26 15:31:52.235119] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:34.801 [2024-04-26 15:31:52.235126] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:34.801 [2024-04-26 15:31:52.235129] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.235133] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6e5c30): datao=0, datal=4096, cccid=4 00:21:34.801 [2024-04-26 15:31:52.235137] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x74df00) on tqpair(0x6e5c30): expected_datao=0, payload_size=4096 00:21:34.801 [2024-04-26 15:31:52.235141] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.235148] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.235151] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.235355] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.801 [2024-04-26 15:31:52.235363] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.801 [2024-04-26 15:31:52.235367] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.801 [2024-04-26 15:31:52.235370] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74df00) on tqpair=0x6e5c30 00:21:34.801 [2024-04-26 15:31:52.235377] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:34.801 [2024-04-26 15:31:52.235385] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:34.801 [2024-04-26 15:31:52.235392] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:34.801 [2024-04-26 15:31:52.235398] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:34.801 [2024-04-26 15:31:52.235403] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:34.802 [2024-04-26 15:31:52.235408] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:34.802 [2024-04-26 15:31:52.235412] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:34.802 [2024-04-26 15:31:52.235417] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:34.802 [2024-04-26 15:31:52.235428] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.802 [2024-04-26 15:31:52.235432] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6e5c30) 00:21:34.802 [2024-04-26 15:31:52.235439] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 
cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.802 [2024-04-26 15:31:52.235445] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.802 [2024-04-26 15:31:52.235449] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.802 [2024-04-26 15:31:52.235452] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6e5c30) 00:21:34.802 [2024-04-26 15:31:52.235458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.802 [2024-04-26 15:31:52.235470] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74df00, cid 4, qid 0 00:21:34.802 [2024-04-26 15:31:52.235475] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74e060, cid 5, qid 0 00:21:34.802 [2024-04-26 15:31:52.235696] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.802 [2024-04-26 15:31:52.235702] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.802 [2024-04-26 15:31:52.235705] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.802 [2024-04-26 15:31:52.235709] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74df00) on tqpair=0x6e5c30 00:21:34.802 [2024-04-26 15:31:52.235716] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.802 [2024-04-26 15:31:52.235721] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.802 [2024-04-26 15:31:52.235725] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.802 [2024-04-26 15:31:52.235728] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74e060) on tqpair=0x6e5c30 00:21:34.802 [2024-04-26 15:31:52.235737] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.802 [2024-04-26 15:31:52.235741] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on 
tqpair(0x6e5c30) 00:21:34.802 [2024-04-26 15:31:52.235747] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.802 [2024-04-26 15:31:52.235756] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74e060, cid 5, qid 0 00:21:34.802 [2024-04-26 15:31:52.235997] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.802 [2024-04-26 15:31:52.236004] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.802 [2024-04-26 15:31:52.236007] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.802 [2024-04-26 15:31:52.236011] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74e060) on tqpair=0x6e5c30 00:21:34.802 [2024-04-26 15:31:52.236020] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.802 [2024-04-26 15:31:52.236023] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6e5c30) 00:21:34.802 [2024-04-26 15:31:52.236030] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.802 [2024-04-26 15:31:52.236039] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74e060, cid 5, qid 0 00:21:34.802 [2024-04-26 15:31:52.236231] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.802 [2024-04-26 15:31:52.236237] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.802 [2024-04-26 15:31:52.236241] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.802 [2024-04-26 15:31:52.236244] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74e060) on tqpair=0x6e5c30 00:21:34.802 [2024-04-26 15:31:52.236253] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.802 [2024-04-26 15:31:52.236256] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6e5c30) 00:21:34.802 [2024-04-26 15:31:52.236263] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.802 [2024-04-26 15:31:52.236272] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74e060, cid 5, qid 0 00:21:34.802 [2024-04-26 15:31:52.236500] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.802 [2024-04-26 15:31:52.236507] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.802 [2024-04-26 15:31:52.236510] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.802 [2024-04-26 15:31:52.236514] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74e060) on tqpair=0x6e5c30 00:21:34.802 [2024-04-26 15:31:52.236524] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.802 [2024-04-26 15:31:52.236528] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6e5c30) 00:21:34.802 [2024-04-26 15:31:52.236534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.802 [2024-04-26 15:31:52.236542] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.802 [2024-04-26 15:31:52.236545] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6e5c30) 00:21:34.802 [2024-04-26 15:31:52.236551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.802 [2024-04-26 15:31:52.236558] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.802 [2024-04-26 15:31:52.236562] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=6 on tqpair(0x6e5c30) 00:21:34.802 [2024-04-26 15:31:52.236568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.802 [2024-04-26 15:31:52.236575] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.802 [2024-04-26 15:31:52.236579] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x6e5c30) 00:21:34.802 [2024-04-26 15:31:52.236585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.802 [2024-04-26 15:31:52.236595] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74e060, cid 5, qid 0 00:21:34.802 [2024-04-26 15:31:52.236600] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74df00, cid 4, qid 0 00:21:34.802 [2024-04-26 15:31:52.236607] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74e1c0, cid 6, qid 0 00:21:34.802 [2024-04-26 15:31:52.236611] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74e320, cid 7, qid 0 00:21:34.802 [2024-04-26 15:31:52.240846] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:34.802 [2024-04-26 15:31:52.240854] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:34.802 [2024-04-26 15:31:52.240857] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:34.802 [2024-04-26 15:31:52.240861] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6e5c30): datao=0, datal=8192, cccid=5 00:21:34.802 [2024-04-26 15:31:52.240865] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x74e060) on tqpair(0x6e5c30): expected_datao=0, payload_size=8192 00:21:34.802 [2024-04-26 15:31:52.240869] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:21:34.802 [2024-04-26 15:31:52.240876] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:34.802 [2024-04-26 15:31:52.240879] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:34.802 [2024-04-26 15:31:52.240885] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:34.802 [2024-04-26 15:31:52.240890] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:34.802 [2024-04-26 15:31:52.240894] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:34.802 [2024-04-26 15:31:52.240897] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6e5c30): datao=0, datal=512, cccid=4 00:21:34.802 [2024-04-26 15:31:52.240901] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x74df00) on tqpair(0x6e5c30): expected_datao=0, payload_size=512 00:21:34.802 [2024-04-26 15:31:52.240905] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.802 [2024-04-26 15:31:52.240912] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:34.802 [2024-04-26 15:31:52.240915] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:34.802 [2024-04-26 15:31:52.240920] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:34.802 [2024-04-26 15:31:52.240926] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:34.802 [2024-04-26 15:31:52.240929] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:34.802 [2024-04-26 15:31:52.240933] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6e5c30): datao=0, datal=512, cccid=6 00:21:34.802 [2024-04-26 15:31:52.240937] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x74e1c0) on tqpair(0x6e5c30): expected_datao=0, payload_size=512 00:21:34.802 [2024-04-26 15:31:52.240941] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.803 [2024-04-26 15:31:52.240947] 
nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:34.803 [2024-04-26 15:31:52.240950] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:34.803 [2024-04-26 15:31:52.240956] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:34.803 [2024-04-26 15:31:52.240961] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:34.803 [2024-04-26 15:31:52.240965] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:34.803 [2024-04-26 15:31:52.240968] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6e5c30): datao=0, datal=4096, cccid=7 00:21:34.803 [2024-04-26 15:31:52.240972] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x74e320) on tqpair(0x6e5c30): expected_datao=0, payload_size=4096 00:21:34.803 [2024-04-26 15:31:52.240976] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.803 [2024-04-26 15:31:52.240983] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:34.803 [2024-04-26 15:31:52.240986] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:34.803 [2024-04-26 15:31:52.240992] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.803 [2024-04-26 15:31:52.240997] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.803 [2024-04-26 15:31:52.241000] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.803 [2024-04-26 15:31:52.241004] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74e060) on tqpair=0x6e5c30 00:21:34.803 [2024-04-26 15:31:52.241018] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.803 [2024-04-26 15:31:52.241024] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.803 [2024-04-26 15:31:52.241027] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.803 [2024-04-26 15:31:52.241031] nvme_tcp.c: 
908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74df00) on tqpair=0x6e5c30 00:21:34.803 [2024-04-26 15:31:52.241039] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.803 [2024-04-26 15:31:52.241045] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.803 [2024-04-26 15:31:52.241048] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.803 [2024-04-26 15:31:52.241052] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74e1c0) on tqpair=0x6e5c30 00:21:34.803 [2024-04-26 15:31:52.241059] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.803 [2024-04-26 15:31:52.241065] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.803 [2024-04-26 15:31:52.241068] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.803 [2024-04-26 15:31:52.241071] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74e320) on tqpair=0x6e5c30 00:21:34.803 ===================================================== 00:21:34.803 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:34.803 ===================================================== 00:21:34.803 Controller Capabilities/Features 00:21:34.803 ================================ 00:21:34.803 Vendor ID: 8086 00:21:34.803 Subsystem Vendor ID: 8086 00:21:34.803 Serial Number: SPDK00000000000001 00:21:34.803 Model Number: SPDK bdev Controller 00:21:34.803 Firmware Version: 24.05 00:21:34.803 Recommended Arb Burst: 6 00:21:34.803 IEEE OUI Identifier: e4 d2 5c 00:21:34.803 Multi-path I/O 00:21:34.803 May have multiple subsystem ports: Yes 00:21:34.803 May have multiple controllers: Yes 00:21:34.803 Associated with SR-IOV VF: No 00:21:34.803 Max Data Transfer Size: 131072 00:21:34.803 Max Number of Namespaces: 32 00:21:34.803 Max Number of I/O Queues: 127 00:21:34.803 NVMe Specification Version (VS): 1.3 00:21:34.803 NVMe Specification Version 
(Identify): 1.3 00:21:34.803 Maximum Queue Entries: 128 00:21:34.803 Contiguous Queues Required: Yes 00:21:34.803 Arbitration Mechanisms Supported 00:21:34.803 Weighted Round Robin: Not Supported 00:21:34.803 Vendor Specific: Not Supported 00:21:34.803 Reset Timeout: 15000 ms 00:21:34.803 Doorbell Stride: 4 bytes 00:21:34.803 NVM Subsystem Reset: Not Supported 00:21:34.803 Command Sets Supported 00:21:34.803 NVM Command Set: Supported 00:21:34.803 Boot Partition: Not Supported 00:21:34.803 Memory Page Size Minimum: 4096 bytes 00:21:34.803 Memory Page Size Maximum: 4096 bytes 00:21:34.803 Persistent Memory Region: Not Supported 00:21:34.803 Optional Asynchronous Events Supported 00:21:34.803 Namespace Attribute Notices: Supported 00:21:34.803 Firmware Activation Notices: Not Supported 00:21:34.803 ANA Change Notices: Not Supported 00:21:34.803 PLE Aggregate Log Change Notices: Not Supported 00:21:34.803 LBA Status Info Alert Notices: Not Supported 00:21:34.803 EGE Aggregate Log Change Notices: Not Supported 00:21:34.803 Normal NVM Subsystem Shutdown event: Not Supported 00:21:34.803 Zone Descriptor Change Notices: Not Supported 00:21:34.803 Discovery Log Change Notices: Not Supported 00:21:34.803 Controller Attributes 00:21:34.803 128-bit Host Identifier: Supported 00:21:34.803 Non-Operational Permissive Mode: Not Supported 00:21:34.803 NVM Sets: Not Supported 00:21:34.803 Read Recovery Levels: Not Supported 00:21:34.803 Endurance Groups: Not Supported 00:21:34.803 Predictable Latency Mode: Not Supported 00:21:34.803 Traffic Based Keep ALive: Not Supported 00:21:34.803 Namespace Granularity: Not Supported 00:21:34.803 SQ Associations: Not Supported 00:21:34.803 UUID List: Not Supported 00:21:34.803 Multi-Domain Subsystem: Not Supported 00:21:34.803 Fixed Capacity Management: Not Supported 00:21:34.803 Variable Capacity Management: Not Supported 00:21:34.803 Delete Endurance Group: Not Supported 00:21:34.803 Delete NVM Set: Not Supported 00:21:34.803 Extended LBA 
Formats Supported: Not Supported 00:21:34.803 Flexible Data Placement Supported: Not Supported 00:21:34.803 00:21:34.803 Controller Memory Buffer Support 00:21:34.803 ================================ 00:21:34.803 Supported: No 00:21:34.803 00:21:34.803 Persistent Memory Region Support 00:21:34.803 ================================ 00:21:34.803 Supported: No 00:21:34.803 00:21:34.803 Admin Command Set Attributes 00:21:34.803 ============================ 00:21:34.803 Security Send/Receive: Not Supported 00:21:34.803 Format NVM: Not Supported 00:21:34.803 Firmware Activate/Download: Not Supported 00:21:34.803 Namespace Management: Not Supported 00:21:34.803 Device Self-Test: Not Supported 00:21:34.803 Directives: Not Supported 00:21:34.803 NVMe-MI: Not Supported 00:21:34.803 Virtualization Management: Not Supported 00:21:34.803 Doorbell Buffer Config: Not Supported 00:21:34.803 Get LBA Status Capability: Not Supported 00:21:34.803 Command & Feature Lockdown Capability: Not Supported 00:21:34.803 Abort Command Limit: 4 00:21:34.803 Async Event Request Limit: 4 00:21:34.803 Number of Firmware Slots: N/A 00:21:34.803 Firmware Slot 1 Read-Only: N/A 00:21:34.803 Firmware Activation Without Reset: N/A 00:21:34.803 Multiple Update Detection Support: N/A 00:21:34.803 Firmware Update Granularity: No Information Provided 00:21:34.803 Per-Namespace SMART Log: No 00:21:34.803 Asymmetric Namespace Access Log Page: Not Supported 00:21:34.803 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:34.803 Command Effects Log Page: Supported 00:21:34.803 Get Log Page Extended Data: Supported 00:21:34.803 Telemetry Log Pages: Not Supported 00:21:34.803 Persistent Event Log Pages: Not Supported 00:21:34.803 Supported Log Pages Log Page: May Support 00:21:34.803 Commands Supported & Effects Log Page: Not Supported 00:21:34.803 Feature Identifiers & Effects Log Page:May Support 00:21:34.803 NVMe-MI Commands & Effects Log Page: May Support 00:21:34.803 Data Area 4 for Telemetry Log: Not Supported 
00:21:34.803 Error Log Page Entries Supported: 128 00:21:34.803 Keep Alive: Supported 00:21:34.803 Keep Alive Granularity: 10000 ms 00:21:34.803 00:21:34.803 NVM Command Set Attributes 00:21:34.803 ========================== 00:21:34.803 Submission Queue Entry Size 00:21:34.803 Max: 64 00:21:34.803 Min: 64 00:21:34.803 Completion Queue Entry Size 00:21:34.803 Max: 16 00:21:34.803 Min: 16 00:21:34.803 Number of Namespaces: 32 00:21:34.803 Compare Command: Supported 00:21:34.803 Write Uncorrectable Command: Not Supported 00:21:34.803 Dataset Management Command: Supported 00:21:34.803 Write Zeroes Command: Supported 00:21:34.803 Set Features Save Field: Not Supported 00:21:34.803 Reservations: Supported 00:21:34.803 Timestamp: Not Supported 00:21:34.803 Copy: Supported 00:21:34.803 Volatile Write Cache: Present 00:21:34.803 Atomic Write Unit (Normal): 1 00:21:34.803 Atomic Write Unit (PFail): 1 00:21:34.803 Atomic Compare & Write Unit: 1 00:21:34.803 Fused Compare & Write: Supported 00:21:34.803 Scatter-Gather List 00:21:34.803 SGL Command Set: Supported 00:21:34.803 SGL Keyed: Supported 00:21:34.803 SGL Bit Bucket Descriptor: Not Supported 00:21:34.803 SGL Metadata Pointer: Not Supported 00:21:34.803 Oversized SGL: Not Supported 00:21:34.803 SGL Metadata Address: Not Supported 00:21:34.803 SGL Offset: Supported 00:21:34.803 Transport SGL Data Block: Not Supported 00:21:34.803 Replay Protected Memory Block: Not Supported 00:21:34.803 00:21:34.803 Firmware Slot Information 00:21:34.803 ========================= 00:21:34.803 Active slot: 1 00:21:34.803 Slot 1 Firmware Revision: 24.05 00:21:34.803 00:21:34.803 00:21:34.803 Commands Supported and Effects 00:21:34.803 ============================== 00:21:34.803 Admin Commands 00:21:34.803 -------------- 00:21:34.803 Get Log Page (02h): Supported 00:21:34.803 Identify (06h): Supported 00:21:34.803 Abort (08h): Supported 00:21:34.803 Set Features (09h): Supported 00:21:34.804 Get Features (0Ah): Supported 00:21:34.804 
Asynchronous Event Request (0Ch): Supported 00:21:34.804 Keep Alive (18h): Supported 00:21:34.804 I/O Commands 00:21:34.804 ------------ 00:21:34.804 Flush (00h): Supported LBA-Change 00:21:34.804 Write (01h): Supported LBA-Change 00:21:34.804 Read (02h): Supported 00:21:34.804 Compare (05h): Supported 00:21:34.804 Write Zeroes (08h): Supported LBA-Change 00:21:34.804 Dataset Management (09h): Supported LBA-Change 00:21:34.804 Copy (19h): Supported LBA-Change 00:21:34.804 Unknown (79h): Supported LBA-Change 00:21:34.804 Unknown (7Ah): Supported 00:21:34.804 00:21:34.804 Error Log 00:21:34.804 ========= 00:21:34.804 00:21:34.804 Arbitration 00:21:34.804 =========== 00:21:34.804 Arbitration Burst: 1 00:21:34.804 00:21:34.804 Power Management 00:21:34.804 ================ 00:21:34.804 Number of Power States: 1 00:21:34.804 Current Power State: Power State #0 00:21:34.804 Power State #0: 00:21:34.804 Max Power: 0.00 W 00:21:34.804 Non-Operational State: Operational 00:21:34.804 Entry Latency: Not Reported 00:21:34.804 Exit Latency: Not Reported 00:21:34.804 Relative Read Throughput: 0 00:21:34.804 Relative Read Latency: 0 00:21:34.804 Relative Write Throughput: 0 00:21:34.804 Relative Write Latency: 0 00:21:34.804 Idle Power: Not Reported 00:21:34.804 Active Power: Not Reported 00:21:34.804 Non-Operational Permissive Mode: Not Supported 00:21:34.804 00:21:34.804 Health Information 00:21:34.804 ================== 00:21:34.804 Critical Warnings: 00:21:34.804 Available Spare Space: OK 00:21:34.804 Temperature: OK 00:21:34.804 Device Reliability: OK 00:21:34.804 Read Only: No 00:21:34.804 Volatile Memory Backup: OK 00:21:34.804 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:34.804 Temperature Threshold: [2024-04-26 15:31:52.241173] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.804 [2024-04-26 15:31:52.241178] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x6e5c30) 00:21:34.804 [2024-04-26 
15:31:52.241185] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.804 [2024-04-26 15:31:52.241197] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74e320, cid 7, qid 0 00:21:34.804 [2024-04-26 15:31:52.241418] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.804 [2024-04-26 15:31:52.241425] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.804 [2024-04-26 15:31:52.241428] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.804 [2024-04-26 15:31:52.241432] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74e320) on tqpair=0x6e5c30 00:21:34.804 [2024-04-26 15:31:52.241457] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:34.804 [2024-04-26 15:31:52.241468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.804 [2024-04-26 15:31:52.241475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.804 [2024-04-26 15:31:52.241480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.804 [2024-04-26 15:31:52.241486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.804 [2024-04-26 15:31:52.241494] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.804 [2024-04-26 15:31:52.241498] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.804 [2024-04-26 15:31:52.241501] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6e5c30) 00:21:34.804 [2024-04-26 15:31:52.241508] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.804 [2024-04-26 15:31:52.241519] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74dda0, cid 3, qid 0 00:21:34.804 [2024-04-26 15:31:52.241721] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.804 [2024-04-26 15:31:52.241727] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.804 [2024-04-26 15:31:52.241730] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.804 [2024-04-26 15:31:52.241734] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74dda0) on tqpair=0x6e5c30 00:21:34.804 [2024-04-26 15:31:52.241741] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.804 [2024-04-26 15:31:52.241747] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.804 [2024-04-26 15:31:52.241750] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6e5c30) 00:21:34.804 [2024-04-26 15:31:52.241757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.804 [2024-04-26 15:31:52.241769] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74dda0, cid 3, qid 0 00:21:34.804 [2024-04-26 15:31:52.241970] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.804 [2024-04-26 15:31:52.241977] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.804 [2024-04-26 15:31:52.241981] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.804 [2024-04-26 15:31:52.241984] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74dda0) on tqpair=0x6e5c30 00:21:34.804 [2024-04-26 15:31:52.241989] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:34.804 [2024-04-26 
15:31:52.241993] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:34.804 [2024-04-26 15:31:52.242002] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.804 [2024-04-26 15:31:52.242006] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.804 [2024-04-26 15:31:52.242010] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6e5c30) 00:21:34.804 [2024-04-26 15:31:52.242016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.804 [2024-04-26 15:31:52.242026] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74dda0, cid 3, qid 0 00:21:34.804 [2024-04-26 15:31:52.242203] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.804 [2024-04-26 15:31:52.242209] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.804 [2024-04-26 15:31:52.242212] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.804 [2024-04-26 15:31:52.242216] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74dda0) on tqpair=0x6e5c30 00:21:34.804 [2024-04-26 15:31:52.242226] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.804 [2024-04-26 15:31:52.242230] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.804 [2024-04-26 15:31:52.242233] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6e5c30) 00:21:34.804 [2024-04-26 15:31:52.242240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.804 [2024-04-26 15:31:52.242249] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74dda0, cid 3, qid 0 00:21:34.804 [2024-04-26 15:31:52.242424] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:21:34.804 [2024-04-26 15:31:52.242431] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.804 [2024-04-26 15:31:52.242434] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:34.804 [2024-04-26 15:31:52.242438] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74dda0) on tqpair=0x6e5c30 00:21:34.804 [2024-04-26 15:31:52.242447] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:34.804 [2024-04-26 15:31:52.242451] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:34.804 [2024-04-26 15:31:52.242454] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6e5c30) 00:21:34.804 [2024-04-26 15:31:52.242461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.804 [2024-04-26 15:31:52.242470] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74dda0, cid 3, qid 0 00:21:34.804 [2024-04-26 15:31:52.242727] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:34.804 [2024-04-26 15:31:52.242733] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:34.804 [2024-04-26 15:31:52.242736] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.242740] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74dda0) on tqpair=0x6e5c30 00:21:35.066 [2024-04-26 15:31:52.242755] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.242759] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.242764] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6e5c30) 00:21:35.066 [2024-04-26 15:31:52.242771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:35.066 [2024-04-26 15:31:52.242782] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74dda0, cid 3, qid 0 00:21:35.066 [2024-04-26 15:31:52.242980] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:35.066 [2024-04-26 15:31:52.242988] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:35.066 [2024-04-26 15:31:52.242991] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.242995] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74dda0) on tqpair=0x6e5c30 00:21:35.066 [2024-04-26 15:31:52.243004] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.243008] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.243011] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6e5c30) 00:21:35.066 [2024-04-26 15:31:52.243018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.066 [2024-04-26 15:31:52.243028] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74dda0, cid 3, qid 0 00:21:35.066 [2024-04-26 15:31:52.243201] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:35.066 [2024-04-26 15:31:52.243207] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:35.066 [2024-04-26 15:31:52.243210] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.243214] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74dda0) on tqpair=0x6e5c30 00:21:35.066 [2024-04-26 15:31:52.243224] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.243228] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.243231] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6e5c30) 00:21:35.066 [2024-04-26 15:31:52.243238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.066 [2024-04-26 15:31:52.243247] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74dda0, cid 3, qid 0 00:21:35.066 [2024-04-26 15:31:52.243433] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:35.066 [2024-04-26 15:31:52.243441] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:35.066 [2024-04-26 15:31:52.243444] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.243448] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74dda0) on tqpair=0x6e5c30 00:21:35.066 [2024-04-26 15:31:52.243457] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.243461] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.243464] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6e5c30) 00:21:35.066 [2024-04-26 15:31:52.243471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.066 [2024-04-26 15:31:52.243480] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74dda0, cid 3, qid 0 00:21:35.066 [2024-04-26 15:31:52.243684] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:35.066 [2024-04-26 15:31:52.243690] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:35.066 [2024-04-26 15:31:52.243693] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.243697] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74dda0) on tqpair=0x6e5c30 00:21:35.066 [2024-04-26 
15:31:52.243708] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.243712] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.243716] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6e5c30) 00:21:35.066 [2024-04-26 15:31:52.243722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.066 [2024-04-26 15:31:52.243732] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74dda0, cid 3, qid 0 00:21:35.066 [2024-04-26 15:31:52.243887] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:35.066 [2024-04-26 15:31:52.243893] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:35.066 [2024-04-26 15:31:52.243897] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.243900] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74dda0) on tqpair=0x6e5c30 00:21:35.066 [2024-04-26 15:31:52.243910] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.243914] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.243917] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6e5c30) 00:21:35.066 [2024-04-26 15:31:52.243924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.066 [2024-04-26 15:31:52.243933] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74dda0, cid 3, qid 0 00:21:35.066 [2024-04-26 15:31:52.244128] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:35.066 [2024-04-26 15:31:52.244134] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:35.066 [2024-04-26 
15:31:52.244137] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.244141] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74dda0) on tqpair=0x6e5c30 00:21:35.066 [2024-04-26 15:31:52.244150] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.244154] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.244158] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6e5c30) 00:21:35.066 [2024-04-26 15:31:52.244164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.066 [2024-04-26 15:31:52.244174] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74dda0, cid 3, qid 0 00:21:35.066 [2024-04-26 15:31:52.244389] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:35.066 [2024-04-26 15:31:52.244396] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:35.066 [2024-04-26 15:31:52.244399] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.244403] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74dda0) on tqpair=0x6e5c30 00:21:35.066 [2024-04-26 15:31:52.244412] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.244416] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.244419] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6e5c30) 00:21:35.066 [2024-04-26 15:31:52.244426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.066 [2024-04-26 15:31:52.244435] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74dda0, cid 3, qid 0 
00:21:35.066 [2024-04-26 15:31:52.244591] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:35.066 [2024-04-26 15:31:52.244597] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:35.066 [2024-04-26 15:31:52.244600] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.244604] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74dda0) on tqpair=0x6e5c30 00:21:35.066 [2024-04-26 15:31:52.244613] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.244617] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.066 [2024-04-26 15:31:52.244622] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6e5c30) 00:21:35.066 [2024-04-26 15:31:52.244629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.066 [2024-04-26 15:31:52.244638] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74dda0, cid 3, qid 0 00:21:35.067 [2024-04-26 15:31:52.248845] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:35.067 [2024-04-26 15:31:52.248854] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:35.067 [2024-04-26 15:31:52.248857] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:35.067 [2024-04-26 15:31:52.248861] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74dda0) on tqpair=0x6e5c30 00:21:35.067 [2024-04-26 15:31:52.248871] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.067 [2024-04-26 15:31:52.248874] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.067 [2024-04-26 15:31:52.248878] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6e5c30) 00:21:35.067 [2024-04-26 15:31:52.248884] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.067 [2024-04-26 15:31:52.248895] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74dda0, cid 3, qid 0 00:21:35.067 [2024-04-26 15:31:52.249082] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:35.067 [2024-04-26 15:31:52.249088] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:35.067 [2024-04-26 15:31:52.249091] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:35.067 [2024-04-26 15:31:52.249095] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x74dda0) on tqpair=0x6e5c30 00:21:35.067 [2024-04-26 15:31:52.249102] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:21:35.067 0 Kelvin (-273 Celsius) 00:21:35.067 Available Spare: 0% 00:21:35.067 Available Spare Threshold: 0% 00:21:35.067 Life Percentage Used: 0% 00:21:35.067 Data Units Read: 0 00:21:35.067 Data Units Written: 0 00:21:35.067 Host Read Commands: 0 00:21:35.067 Host Write Commands: 0 00:21:35.067 Controller Busy Time: 0 minutes 00:21:35.067 Power Cycles: 0 00:21:35.067 Power On Hours: 0 hours 00:21:35.067 Unsafe Shutdowns: 0 00:21:35.067 Unrecoverable Media Errors: 0 00:21:35.067 Lifetime Error Log Entries: 0 00:21:35.067 Warning Temperature Time: 0 minutes 00:21:35.067 Critical Temperature Time: 0 minutes 00:21:35.067 00:21:35.067 Number of Queues 00:21:35.067 ================ 00:21:35.067 Number of I/O Submission Queues: 127 00:21:35.067 Number of I/O Completion Queues: 127 00:21:35.067 00:21:35.067 Active Namespaces 00:21:35.067 ================= 00:21:35.067 Namespace ID:1 00:21:35.067 Error Recovery Timeout: Unlimited 00:21:35.067 Command Set Identifier: NVM (00h) 00:21:35.067 Deallocate: Supported 00:21:35.067 Deallocated/Unwritten Error: Not Supported 00:21:35.067 Deallocated Read Value: 
Unknown 00:21:35.067 Deallocate in Write Zeroes: Not Supported 00:21:35.067 Deallocated Guard Field: 0xFFFF 00:21:35.067 Flush: Supported 00:21:35.067 Reservation: Supported 00:21:35.067 Namespace Sharing Capabilities: Multiple Controllers 00:21:35.067 Size (in LBAs): 131072 (0GiB) 00:21:35.067 Capacity (in LBAs): 131072 (0GiB) 00:21:35.067 Utilization (in LBAs): 131072 (0GiB) 00:21:35.067 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:35.067 EUI64: ABCDEF0123456789 00:21:35.067 UUID: 65e54625-d689-43eb-ac48-1b70868f347c 00:21:35.067 Thin Provisioning: Not Supported 00:21:35.067 Per-NS Atomic Units: Yes 00:21:35.067 Atomic Boundary Size (Normal): 0 00:21:35.067 Atomic Boundary Size (PFail): 0 00:21:35.067 Atomic Boundary Offset: 0 00:21:35.067 Maximum Single Source Range Length: 65535 00:21:35.067 Maximum Copy Length: 65535 00:21:35.067 Maximum Source Range Count: 1 00:21:35.067 NGUID/EUI64 Never Reused: No 00:21:35.067 Namespace Write Protected: No 00:21:35.067 Number of LBA Formats: 1 00:21:35.067 Current LBA Format: LBA Format #00 00:21:35.067 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:35.067 00:21:35.067 15:31:52 -- host/identify.sh@51 -- # sync 00:21:35.067 15:31:52 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:35.067 15:31:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.067 15:31:52 -- common/autotest_common.sh@10 -- # set +x 00:21:35.067 15:31:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.067 15:31:52 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:35.067 15:31:52 -- host/identify.sh@56 -- # nvmftestfini 00:21:35.067 15:31:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:35.067 15:31:52 -- nvmf/common.sh@117 -- # sync 00:21:35.067 15:31:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:35.067 15:31:52 -- nvmf/common.sh@120 -- # set +e 00:21:35.067 15:31:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:35.067 15:31:52 -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:21:35.067 rmmod nvme_tcp 00:21:35.067 rmmod nvme_fabrics 00:21:35.067 rmmod nvme_keyring 00:21:35.067 15:31:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:35.067 15:31:52 -- nvmf/common.sh@124 -- # set -e 00:21:35.067 15:31:52 -- nvmf/common.sh@125 -- # return 0 00:21:35.067 15:31:52 -- nvmf/common.sh@478 -- # '[' -n 1705115 ']' 00:21:35.067 15:31:52 -- nvmf/common.sh@479 -- # killprocess 1705115 00:21:35.067 15:31:52 -- common/autotest_common.sh@936 -- # '[' -z 1705115 ']' 00:21:35.067 15:31:52 -- common/autotest_common.sh@940 -- # kill -0 1705115 00:21:35.067 15:31:52 -- common/autotest_common.sh@941 -- # uname 00:21:35.067 15:31:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:35.067 15:31:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1705115 00:21:35.067 15:31:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:35.067 15:31:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:35.067 15:31:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1705115' 00:21:35.067 killing process with pid 1705115 00:21:35.067 15:31:52 -- common/autotest_common.sh@955 -- # kill 1705115 00:21:35.067 [2024-04-26 15:31:52.396873] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:35.067 15:31:52 -- common/autotest_common.sh@960 -- # wait 1705115 00:21:35.327 15:31:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:35.328 15:31:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:35.328 15:31:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:35.328 15:31:52 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:35.328 15:31:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:35.328 15:31:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.328 15:31:52 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:35.328 15:31:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.241 15:31:54 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:37.241 00:21:37.241 real 0m10.451s 00:21:37.241 user 0m7.666s 00:21:37.241 sys 0m5.389s 00:21:37.241 15:31:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:37.241 15:31:54 -- common/autotest_common.sh@10 -- # set +x 00:21:37.241 ************************************ 00:21:37.241 END TEST nvmf_identify 00:21:37.241 ************************************ 00:21:37.241 15:31:54 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:37.241 15:31:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:37.241 15:31:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:37.241 15:31:54 -- common/autotest_common.sh@10 -- # set +x 00:21:37.502 ************************************ 00:21:37.502 START TEST nvmf_perf 00:21:37.502 ************************************ 00:21:37.502 15:31:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:37.502 * Looking for test storage... 
00:21:37.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:37.502 15:31:54 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:37.502 15:31:54 -- nvmf/common.sh@7 -- # uname -s 00:21:37.502 15:31:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.502 15:31:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.502 15:31:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.502 15:31:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.502 15:31:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.502 15:31:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.502 15:31:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.502 15:31:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.502 15:31:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.502 15:31:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.502 15:31:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:37.502 15:31:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:37.502 15:31:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.502 15:31:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.502 15:31:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:37.502 15:31:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:37.502 15:31:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:37.502 15:31:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.502 15:31:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.502 15:31:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.502 15:31:54 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.502 15:31:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.502 15:31:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.502 15:31:54 -- paths/export.sh@5 -- # export PATH 00:21:37.502 15:31:54 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.502 15:31:54 -- nvmf/common.sh@47 -- # : 0 00:21:37.502 15:31:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:37.502 15:31:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:37.502 15:31:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:37.502 15:31:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.502 15:31:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.502 15:31:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:37.502 15:31:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:37.502 15:31:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:37.502 15:31:54 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:37.502 15:31:54 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:37.502 15:31:54 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:37.502 15:31:54 -- host/perf.sh@17 -- # nvmftestinit 00:21:37.502 15:31:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:37.502 15:31:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.502 15:31:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:37.502 15:31:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:37.502 15:31:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:37.502 15:31:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.502 15:31:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:21:37.502 15:31:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.502 15:31:54 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:37.502 15:31:54 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:37.502 15:31:54 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:37.502 15:31:54 -- common/autotest_common.sh@10 -- # set +x 00:21:44.198 15:32:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:44.198 15:32:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:44.198 15:32:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:44.198 15:32:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:44.198 15:32:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:44.198 15:32:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:44.198 15:32:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:44.198 15:32:01 -- nvmf/common.sh@295 -- # net_devs=() 00:21:44.198 15:32:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:44.198 15:32:01 -- nvmf/common.sh@296 -- # e810=() 00:21:44.198 15:32:01 -- nvmf/common.sh@296 -- # local -ga e810 00:21:44.198 15:32:01 -- nvmf/common.sh@297 -- # x722=() 00:21:44.198 15:32:01 -- nvmf/common.sh@297 -- # local -ga x722 00:21:44.198 15:32:01 -- nvmf/common.sh@298 -- # mlx=() 00:21:44.198 15:32:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:44.198 15:32:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:44.198 15:32:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:44.198 15:32:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:44.198 15:32:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:44.198 15:32:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:44.198 15:32:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:44.198 15:32:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:44.198 15:32:01 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:44.198 15:32:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:44.198 15:32:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:44.198 15:32:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:44.198 15:32:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:44.198 15:32:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:44.198 15:32:01 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:44.198 15:32:01 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:44.198 15:32:01 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:44.198 15:32:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:44.198 15:32:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:44.198 15:32:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:44.198 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:44.198 15:32:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:44.198 15:32:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:44.198 15:32:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.198 15:32:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.198 15:32:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:44.198 15:32:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:44.198 15:32:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:44.198 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:44.198 15:32:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:44.198 15:32:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:44.198 15:32:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.198 15:32:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.198 15:32:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:44.198 15:32:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:44.198 
15:32:01 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:44.198 15:32:01 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:44.198 15:32:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:44.198 15:32:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.198 15:32:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:44.198 15:32:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.198 15:32:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:44.198 Found net devices under 0000:31:00.0: cvl_0_0 00:21:44.198 15:32:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.198 15:32:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:44.198 15:32:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.198 15:32:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:44.198 15:32:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.198 15:32:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:44.198 Found net devices under 0000:31:00.1: cvl_0_1 00:21:44.198 15:32:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.198 15:32:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:44.198 15:32:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:44.198 15:32:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:44.198 15:32:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:44.199 15:32:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:44.199 15:32:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.199 15:32:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:44.199 15:32:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:44.199 15:32:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:44.199 15:32:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:44.199 15:32:01 -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:44.199 15:32:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:44.199 15:32:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:44.199 15:32:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:44.199 15:32:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:44.199 15:32:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:44.199 15:32:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:44.199 15:32:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:44.199 15:32:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:44.199 15:32:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:44.199 15:32:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:44.199 15:32:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:44.199 15:32:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:44.199 15:32:01 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:44.199 15:32:01 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:44.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:44.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:21:44.199 00:21:44.199 --- 10.0.0.2 ping statistics --- 00:21:44.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.199 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:21:44.199 15:32:01 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:44.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:44.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.376 ms 00:21:44.199 00:21:44.199 --- 10.0.0.1 ping statistics --- 00:21:44.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.199 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:21:44.199 15:32:01 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.199 15:32:01 -- nvmf/common.sh@411 -- # return 0 00:21:44.199 15:32:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:44.199 15:32:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.199 15:32:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:44.199 15:32:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:44.199 15:32:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.199 15:32:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:44.199 15:32:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:44.199 15:32:01 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:44.199 15:32:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:44.199 15:32:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:44.199 15:32:01 -- common/autotest_common.sh@10 -- # set +x 00:21:44.199 15:32:01 -- nvmf/common.sh@470 -- # nvmfpid=1709558 00:21:44.199 15:32:01 -- nvmf/common.sh@471 -- # waitforlisten 1709558 00:21:44.199 15:32:01 -- common/autotest_common.sh@817 -- # '[' -z 1709558 ']' 00:21:44.199 15:32:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.199 15:32:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:44.199 15:32:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
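The nvmf_tcp_init sequence traced above (namespace creation, moving the target NIC into it, address assignment, the iptables accept rule, and the ping check) can be condensed into the following standalone sketch. The interface names cvl_0_0/cvl_0_1, the namespace name, and the 10.0.0.0/24 addressing are taken from this run; the DRY_RUN switch is an illustrative addition, since the real commands require root.

```shell
#!/usr/bin/env bash
# Sketch of the namespace-based loopback topology nvmf_tcp_init builds above.
# DRY_RUN=1 prints each command instead of executing it.
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi; }

setup_tcp_test_net() {
  local target_if=${1:-cvl_0_0} initiator_if=${2:-cvl_0_1} ns=${3:-cvl_0_0_ns_spdk}
  run ip netns add "$ns"
  run ip link set "$target_if" netns "$ns"          # target NIC lives inside the netns
  run ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator side stays in the root ns
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  run ip link set "$initiator_if" up
  run ip netns exec "$ns" ip link set "$target_if" up
  run ip netns exec "$ns" ip link set lo up
  run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 10.0.0.2                            # initiator -> target reachability check
}
```

Putting the two ports of the same physical NIC on opposite sides of a network namespace is what lets a single host exercise real NIC hardware for both initiator and target.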
00:21:44.199 15:32:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:44.199 15:32:01 -- common/autotest_common.sh@10 -- # set +x 00:21:44.199 15:32:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:44.199 [2024-04-26 15:32:01.637721] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:21:44.199 [2024-04-26 15:32:01.637785] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.460 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.460 [2024-04-26 15:32:01.709728] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:44.460 [2024-04-26 15:32:01.783356] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.460 [2024-04-26 15:32:01.783395] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.460 [2024-04-26 15:32:01.783405] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.460 [2024-04-26 15:32:01.783413] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.460 [2024-04-26 15:32:01.783419] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:44.460 [2024-04-26 15:32:01.783567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.460 [2024-04-26 15:32:01.783684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.460 [2024-04-26 15:32:01.783845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.460 [2024-04-26 15:32:01.783858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:45.031 15:32:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:45.031 15:32:02 -- common/autotest_common.sh@850 -- # return 0 00:21:45.031 15:32:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:45.031 15:32:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:45.031 15:32:02 -- common/autotest_common.sh@10 -- # set +x 00:21:45.031 15:32:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.031 15:32:02 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:45.031 15:32:02 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:45.602 15:32:02 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:45.602 15:32:02 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:45.862 15:32:03 -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:21:45.862 15:32:03 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:45.862 15:32:03 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:45.862 15:32:03 -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:21:45.862 15:32:03 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:45.862 15:32:03 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:45.862 15:32:03 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t tcp -o 00:21:46.122 [2024-04-26 15:32:03.428124] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.122 15:32:03 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:46.382 15:32:03 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:46.382 15:32:03 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:46.382 15:32:03 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:46.382 15:32:03 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:46.641 15:32:03 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:46.901 [2024-04-26 15:32:04.102608] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.901 15:32:04 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:46.901 15:32:04 -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:21:46.901 15:32:04 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:21:46.901 15:32:04 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:46.901 15:32:04 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:21:48.286 Initializing NVMe Controllers 00:21:48.286 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:21:48.286 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:21:48.286 Initialization complete. Launching workers. 
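The rpc.py calls traced above follow a fixed order: create the transport, create the subsystem, attach each bdev as a namespace, then add the data and discovery listeners. A minimal sketch of that sequence, with the NQN and addresses taken from this run (the rpc.py path is assumed relative to an SPDK checkout; RPC=echo turns it into a dry run, since the real calls need a running nvmf_tgt):

```shell
#!/usr/bin/env bash
# Sketch of the subsystem-setup RPC sequence from host/perf.sh above.
setup_subsystem() {
  local rpc=${RPC:-scripts/rpc.py}          # assumed path; override via RPC=
  local nqn=nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_create_transport -t tcp -o      # -o enables optimized (c2h) data transfer
  $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
  for bdev in Malloc0 Nvme0n1; do           # one namespace per bdev, as in the run above
    $rpc nvmf_subsystem_add_ns "$nqn" "$bdev"
  done
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
}
```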
00:21:48.286 ======================================================== 00:21:48.286 Latency(us) 00:21:48.286 Device Information : IOPS MiB/s Average min max 00:21:48.286 PCIE (0000:65:00.0) NSID 1 from core 0: 80788.63 315.58 395.45 13.29 4578.54 00:21:48.286 ======================================================== 00:21:48.286 Total : 80788.63 315.58 395.45 13.29 4578.54 00:21:48.286 00:21:48.286 15:32:05 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:48.286 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.674 Initializing NVMe Controllers 00:21:49.674 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:49.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:49.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:49.674 Initialization complete. Launching workers. 
00:21:49.674 ======================================================== 00:21:49.674 Latency(us) 00:21:49.674 Device Information : IOPS MiB/s Average min max 00:21:49.674 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 81.00 0.32 12513.45 272.90 46075.29 00:21:49.674 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 17957.25 7962.62 47913.33 00:21:49.674 ======================================================== 00:21:49.674 Total : 137.00 0.54 14738.65 272.90 47913.33 00:21:49.674 00:21:49.674 15:32:06 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:49.674 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.060 Initializing NVMe Controllers 00:21:51.060 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:51.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:51.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:51.060 Initialization complete. Launching workers. 
00:21:51.060 ======================================================== 00:21:51.060 Latency(us) 00:21:51.060 Device Information : IOPS MiB/s Average min max 00:21:51.060 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10420.52 40.71 3070.93 526.13 9882.32 00:21:51.060 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3804.90 14.86 8435.45 6298.76 18900.81 00:21:51.060 ======================================================== 00:21:51.060 Total : 14225.42 55.57 4505.79 526.13 18900.81 00:21:51.060 00:21:51.060 15:32:08 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:51.060 15:32:08 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:51.060 15:32:08 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:51.060 EAL: No free 2048 kB hugepages reported on node 1 00:21:53.603 Initializing NVMe Controllers 00:21:53.603 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:53.603 Controller IO queue size 128, less than required. 00:21:53.603 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:53.603 Controller IO queue size 128, less than required. 00:21:53.603 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:53.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:53.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:53.603 Initialization complete. Launching workers. 
00:21:53.603 ======================================================== 00:21:53.603 Latency(us) 00:21:53.603 Device Information : IOPS MiB/s Average min max 00:21:53.603 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1468.57 367.14 88738.86 58790.43 122974.09 00:21:53.603 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 577.44 144.36 226386.07 68874.64 324906.98 00:21:53.603 ======================================================== 00:21:53.603 Total : 2046.01 511.50 127586.55 58790.43 324906.98 00:21:53.603 00:21:53.603 15:32:10 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:53.603 EAL: No free 2048 kB hugepages reported on node 1 00:21:53.603 No valid NVMe controllers or AIO or URING devices found 00:21:53.603 Initializing NVMe Controllers 00:21:53.603 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:53.603 Controller IO queue size 128, less than required. 00:21:53.603 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:53.603 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:53.603 Controller IO queue size 128, less than required. 00:21:53.603 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:53.603 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:21:53.603 WARNING: Some requested NVMe devices were skipped 00:21:53.603 15:32:10 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:53.603 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.147 Initializing NVMe Controllers 00:21:56.147 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:56.147 Controller IO queue size 128, less than required. 00:21:56.147 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:56.147 Controller IO queue size 128, less than required. 00:21:56.147 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:56.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:56.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:56.147 Initialization complete. Launching workers. 
00:21:56.147 00:21:56.147 ==================== 00:21:56.147 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:56.147 TCP transport: 00:21:56.147 polls: 27129 00:21:56.147 idle_polls: 14219 00:21:56.147 sock_completions: 12910 00:21:56.147 nvme_completions: 5843 00:21:56.147 submitted_requests: 8798 00:21:56.147 queued_requests: 1 00:21:56.147 00:21:56.147 ==================== 00:21:56.147 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:56.147 TCP transport: 00:21:56.147 polls: 26293 00:21:56.147 idle_polls: 12889 00:21:56.147 sock_completions: 13404 00:21:56.147 nvme_completions: 6083 00:21:56.147 submitted_requests: 9170 00:21:56.147 queued_requests: 1 00:21:56.147 ======================================================== 00:21:56.147 Latency(us) 00:21:56.147 Device Information : IOPS MiB/s Average min max 00:21:56.147 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1460.48 365.12 89602.70 44827.63 153708.05 00:21:56.147 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1520.48 380.12 84891.96 45474.27 132641.98 00:21:56.147 ======================================================== 00:21:56.147 Total : 2980.96 745.24 87199.92 44827.63 153708.05 00:21:56.147 00:21:56.147 15:32:13 -- host/perf.sh@66 -- # sync 00:21:56.147 15:32:13 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:56.147 15:32:13 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:56.147 15:32:13 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:56.147 15:32:13 -- host/perf.sh@114 -- # nvmftestfini 00:21:56.147 15:32:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:56.147 15:32:13 -- nvmf/common.sh@117 -- # sync 00:21:56.147 15:32:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:56.147 15:32:13 -- nvmf/common.sh@120 -- # set +e 00:21:56.147 15:32:13 -- nvmf/common.sh@121 
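Every spdk_nvme_perf run above uses the same flag pattern: -q queue depth, -o I/O size in bytes, -w workload, -M read percentage for mixed workloads, -t run time in seconds, and -r a transport ID string identifying the target. A small helper illustrating how those invocations are assembled (the flag meanings match the runs logged above; the helper itself is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the spdk_nvme_perf command lines used in the runs above.
build_perf_cmd() {
  local qd=$1 io_size=$2 workload=$3 read_pct=$4 secs=$5 traddr=$6 svcid=$7
  echo "spdk_nvme_perf -q $qd -o $io_size -w $workload -M $read_pct -t $secs" \
       "-r 'trtype:tcp adrfam:IPv4 traddr:$traddr trsvcid:$svcid'"
}

# Reproduces the 256 KiB mixed-workload run from this log:
build_perf_cmd 128 262144 randrw 50 2 10.0.0.2 4420
```

The transport-stat run above additionally passes --transport-stat, which produces the per-namespace polls/completions counters shown in the log.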
-- # for i in {1..20} 00:21:56.147 15:32:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:56.147 rmmod nvme_tcp 00:21:56.148 rmmod nvme_fabrics 00:21:56.148 rmmod nvme_keyring 00:21:56.148 15:32:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:56.148 15:32:13 -- nvmf/common.sh@124 -- # set -e 00:21:56.148 15:32:13 -- nvmf/common.sh@125 -- # return 0 00:21:56.148 15:32:13 -- nvmf/common.sh@478 -- # '[' -n 1709558 ']' 00:21:56.148 15:32:13 -- nvmf/common.sh@479 -- # killprocess 1709558 00:21:56.148 15:32:13 -- common/autotest_common.sh@936 -- # '[' -z 1709558 ']' 00:21:56.148 15:32:13 -- common/autotest_common.sh@940 -- # kill -0 1709558 00:21:56.148 15:32:13 -- common/autotest_common.sh@941 -- # uname 00:21:56.148 15:32:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:56.148 15:32:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1709558 00:21:56.148 15:32:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:56.148 15:32:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:56.148 15:32:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1709558' 00:21:56.148 killing process with pid 1709558 00:21:56.148 15:32:13 -- common/autotest_common.sh@955 -- # kill 1709558 00:21:56.148 15:32:13 -- common/autotest_common.sh@960 -- # wait 1709558 00:21:58.058 15:32:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:58.058 15:32:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:58.058 15:32:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:58.058 15:32:15 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:58.058 15:32:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:58.058 15:32:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.058 15:32:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.058 15:32:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
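The nvmftestfini teardown traced above and in the following lines is the mirror image of the setup: unload the kernel NVMe/TCP modules, flush the initiator address, and drop the namespace (which returns the target NIC to the root namespace). A hedged sketch, with names taken from this run and DRY_RUN again an illustrative addition:

```shell
#!/usr/bin/env bash
# Sketch of the teardown performed by nvmftestfini/nvmf_tcp_fini above.
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi; }

teardown_tcp_test_net() {
  local initiator_if=${1:-cvl_0_1} ns=${2:-cvl_0_0_ns_spdk}
  run modprobe -v -r nvme-tcp        # dependency removal also drops nvme_fabrics/nvme_keyring, as logged
  run ip -4 addr flush "$initiator_if"
  run ip netns delete "$ns"          # target NIC moves back to the root namespace
}
```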
00:22:00.601 15:32:17 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:00.601 00:22:00.601 real 0m22.727s 00:22:00.601 user 0m55.904s 00:22:00.601 sys 0m7.484s 00:22:00.601 15:32:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:00.601 15:32:17 -- common/autotest_common.sh@10 -- # set +x 00:22:00.601 ************************************ 00:22:00.601 END TEST nvmf_perf 00:22:00.601 ************************************ 00:22:00.601 15:32:17 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:00.601 15:32:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:00.601 15:32:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:00.601 15:32:17 -- common/autotest_common.sh@10 -- # set +x 00:22:00.601 ************************************ 00:22:00.601 START TEST nvmf_fio_host 00:22:00.601 ************************************ 00:22:00.601 15:32:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:00.601 * Looking for test storage... 
00:22:00.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:00.601 15:32:17 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:00.601 15:32:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.601 15:32:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.601 15:32:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.601 15:32:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.601 15:32:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.601 15:32:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.601 15:32:17 -- paths/export.sh@5 -- # export PATH 00:22:00.601 15:32:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.601 15:32:17 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.601 15:32:17 -- nvmf/common.sh@7 -- # uname -s 00:22:00.601 15:32:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.601 15:32:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.601 15:32:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.601 15:32:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.601 15:32:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.601 15:32:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.601 15:32:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.601 15:32:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.601 15:32:17 -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.601 15:32:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.601 15:32:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:00.601 15:32:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:00.601 15:32:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.601 15:32:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.601 15:32:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:00.601 15:32:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.601 15:32:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:00.601 15:32:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.601 15:32:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.601 15:32:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.601 15:32:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.601 15:32:17 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.601 15:32:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.601 15:32:17 -- paths/export.sh@5 -- # export PATH 00:22:00.601 15:32:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.601 15:32:17 -- nvmf/common.sh@47 
-- # : 0 00:22:00.602 15:32:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:00.602 15:32:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:00.602 15:32:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.602 15:32:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.602 15:32:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.602 15:32:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:00.602 15:32:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:00.602 15:32:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:00.602 15:32:17 -- host/fio.sh@12 -- # nvmftestinit 00:22:00.602 15:32:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:00.602 15:32:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.602 15:32:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:00.602 15:32:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:00.602 15:32:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:00.602 15:32:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.602 15:32:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:00.602 15:32:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.602 15:32:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:00.602 15:32:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:00.602 15:32:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:00.602 15:32:17 -- common/autotest_common.sh@10 -- # set +x 00:22:07.186 15:32:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:07.186 15:32:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:07.186 15:32:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:07.186 15:32:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:07.186 15:32:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:07.186 15:32:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:07.186 15:32:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:22:07.186 15:32:24 -- nvmf/common.sh@295 -- # net_devs=() 00:22:07.186 15:32:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:07.186 15:32:24 -- nvmf/common.sh@296 -- # e810=() 00:22:07.186 15:32:24 -- nvmf/common.sh@296 -- # local -ga e810 00:22:07.186 15:32:24 -- nvmf/common.sh@297 -- # x722=() 00:22:07.186 15:32:24 -- nvmf/common.sh@297 -- # local -ga x722 00:22:07.186 15:32:24 -- nvmf/common.sh@298 -- # mlx=() 00:22:07.186 15:32:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:07.186 15:32:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.186 15:32:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.186 15:32:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.186 15:32:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.186 15:32:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.186 15:32:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.186 15:32:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.186 15:32:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.186 15:32:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.186 15:32:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.186 15:32:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.186 15:32:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:07.186 15:32:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:07.186 15:32:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:07.186 15:32:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:07.186 15:32:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:07.186 15:32:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:07.186 15:32:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:22:07.186 15:32:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:07.186 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:07.186 15:32:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.186 15:32:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.186 15:32:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.186 15:32:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.186 15:32:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.186 15:32:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.186 15:32:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:07.186 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:07.186 15:32:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.186 15:32:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.186 15:32:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.186 15:32:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.186 15:32:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.186 15:32:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:07.186 15:32:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:07.186 15:32:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:07.186 15:32:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.186 15:32:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.186 15:32:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:07.186 15:32:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.186 15:32:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:07.186 Found net devices under 0000:31:00.0: cvl_0_0 00:22:07.186 15:32:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.186 15:32:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.186 15:32:24 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.186 15:32:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:07.186 15:32:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.186 15:32:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:07.186 Found net devices under 0000:31:00.1: cvl_0_1 00:22:07.186 15:32:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.186 15:32:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:07.186 15:32:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:07.186 15:32:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:07.186 15:32:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:07.186 15:32:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:07.186 15:32:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:07.186 15:32:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:07.186 15:32:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.186 15:32:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:07.186 15:32:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:07.186 15:32:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:07.186 15:32:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:07.186 15:32:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:07.186 15:32:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.187 15:32:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:07.187 15:32:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:07.187 15:32:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:07.187 15:32:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:07.187 15:32:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:07.187 15:32:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:22:07.187 15:32:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:07.187 15:32:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:07.187 15:32:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:07.187 15:32:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:07.187 15:32:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:07.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:07.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.537 ms 00:22:07.187 00:22:07.187 --- 10.0.0.2 ping statistics --- 00:22:07.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.187 rtt min/avg/max/mdev = 0.537/0.537/0.537/0.000 ms 00:22:07.187 15:32:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:07.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:07.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:22:07.187 00:22:07.187 --- 10.0.0.1 ping statistics --- 00:22:07.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.187 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:22:07.187 15:32:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:07.187 15:32:24 -- nvmf/common.sh@411 -- # return 0 00:22:07.187 15:32:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:07.187 15:32:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:07.187 15:32:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:07.187 15:32:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:07.187 15:32:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:07.187 15:32:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:07.187 15:32:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:07.447 15:32:24 -- host/fio.sh@14 -- # [[ y != y ]] 00:22:07.447 15:32:24 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 
00:22:07.447 15:32:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:07.447 15:32:24 -- common/autotest_common.sh@10 -- # set +x 00:22:07.447 15:32:24 -- host/fio.sh@22 -- # nvmfpid=1716450 00:22:07.447 15:32:24 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:07.447 15:32:24 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:07.447 15:32:24 -- host/fio.sh@26 -- # waitforlisten 1716450 00:22:07.447 15:32:24 -- common/autotest_common.sh@817 -- # '[' -z 1716450 ']' 00:22:07.447 15:32:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.447 15:32:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:07.447 15:32:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.447 15:32:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:07.447 15:32:24 -- common/autotest_common.sh@10 -- # set +x 00:22:07.447 [2024-04-26 15:32:24.725124] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:22:07.447 [2024-04-26 15:32:24.725209] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.447 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.447 [2024-04-26 15:32:24.798737] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:07.447 [2024-04-26 15:32:24.871349] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:07.448 [2024-04-26 15:32:24.871392] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:07.448 [2024-04-26 15:32:24.871400] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:07.448 [2024-04-26 15:32:24.871408] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:07.448 [2024-04-26 15:32:24.871414] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:07.448 [2024-04-26 15:32:24.871573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.448 [2024-04-26 15:32:24.871705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.448 [2024-04-26 15:32:24.871918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.448 [2024-04-26 15:32:24.871918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:08.418 15:32:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:08.418 15:32:25 -- common/autotest_common.sh@850 -- # return 0 00:22:08.418 15:32:25 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:08.418 15:32:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:08.418 15:32:25 -- common/autotest_common.sh@10 -- # set +x 00:22:08.418 [2024-04-26 15:32:25.489249] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:08.418 15:32:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:08.418 15:32:25 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:22:08.418 15:32:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:08.418 15:32:25 -- common/autotest_common.sh@10 -- # set +x 00:22:08.418 15:32:25 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:08.418 15:32:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:08.418 15:32:25 -- common/autotest_common.sh@10 -- # set +x 00:22:08.418 Malloc1 
00:22:08.418 15:32:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:08.418 15:32:25 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:08.418 15:32:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:08.418 15:32:25 -- common/autotest_common.sh@10 -- # set +x 00:22:08.418 15:32:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:08.418 15:32:25 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:08.418 15:32:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:08.418 15:32:25 -- common/autotest_common.sh@10 -- # set +x 00:22:08.418 15:32:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:08.418 15:32:25 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:08.418 15:32:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:08.418 15:32:25 -- common/autotest_common.sh@10 -- # set +x 00:22:08.418 [2024-04-26 15:32:25.572728] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:08.418 15:32:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:08.418 15:32:25 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:08.418 15:32:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:08.418 15:32:25 -- common/autotest_common.sh@10 -- # set +x 00:22:08.418 15:32:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:08.418 15:32:25 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:08.418 15:32:25 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:08.418 15:32:25 -- common/autotest_common.sh@1346 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:08.418 15:32:25 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:08.418 15:32:25 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:08.418 15:32:25 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:08.418 15:32:25 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:08.418 15:32:25 -- common/autotest_common.sh@1327 -- # shift 00:22:08.418 15:32:25 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:08.418 15:32:25 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:08.418 15:32:25 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:08.418 15:32:25 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:08.418 15:32:25 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:08.418 15:32:25 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:08.418 15:32:25 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:08.418 15:32:25 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:08.418 15:32:25 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:08.418 15:32:25 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:08.418 15:32:25 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:08.418 15:32:25 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:08.418 15:32:25 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:08.418 15:32:25 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 
00:22:08.418 15:32:25 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:08.681 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:08.681 fio-3.35 00:22:08.681 Starting 1 thread 00:22:08.681 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.228 00:22:11.228 test: (groupid=0, jobs=1): err= 0: pid=1716907: Fri Apr 26 15:32:28 2024 00:22:11.228 read: IOPS=10.5k, BW=41.2MiB/s (43.2MB/s)(82.6MiB/2004msec) 00:22:11.228 slat (usec): min=2, max=283, avg= 2.18, stdev= 2.81 00:22:11.228 clat (usec): min=3530, max=8981, avg=6690.13, stdev=1169.25 00:22:11.228 lat (usec): min=3532, max=8983, avg=6692.31, stdev=1169.22 00:22:11.228 clat percentiles (usec): 00:22:11.228 | 1.00th=[ 4555], 5.00th=[ 4817], 10.00th=[ 5014], 20.00th=[ 5276], 00:22:11.228 | 30.00th=[ 5604], 40.00th=[ 6849], 50.00th=[ 7111], 60.00th=[ 7308], 00:22:11.228 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 7963], 95.00th=[ 8160], 00:22:11.228 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[ 8848], 99.95th=[ 8848], 00:22:11.228 | 99.99th=[ 8979] 00:22:11.228 bw ( KiB/s): min=36680, max=54840, per=99.86%, avg=42132.00, stdev=8515.88, samples=4 00:22:11.228 iops : min= 9170, max=13710, avg=10533.00, stdev=2128.97, samples=4 00:22:11.228 write: IOPS=10.5k, BW=41.2MiB/s (43.2MB/s)(82.5MiB/2004msec); 0 zone resets 00:22:11.228 slat (usec): min=2, max=288, avg= 2.27, stdev= 2.15 00:22:11.228 clat (usec): min=2895, max=7725, avg=5371.80, stdev=931.77 00:22:11.228 lat (usec): min=2913, max=7727, avg=5374.07, stdev=931.76 00:22:11.228 clat percentiles (usec): 00:22:11.228 | 1.00th=[ 3654], 5.00th=[ 3884], 10.00th=[ 4015], 20.00th=[ 4228], 00:22:11.228 | 30.00th=[ 4490], 40.00th=[ 5473], 50.00th=[ 5735], 60.00th=[ 5866], 00:22:11.228 | 70.00th=[ 5997], 80.00th=[ 6194], 90.00th=[ 6390], 
95.00th=[ 6521], 00:22:11.228 | 99.00th=[ 6849], 99.50th=[ 6980], 99.90th=[ 7242], 99.95th=[ 7308], 00:22:11.228 | 99.99th=[ 7635] 00:22:11.228 bw ( KiB/s): min=37640, max=54784, per=99.94%, avg=42146.00, stdev=8434.73, samples=4 00:22:11.228 iops : min= 9410, max=13696, avg=10536.50, stdev=2108.68, samples=4 00:22:11.228 lat (msec) : 4=4.56%, 10=95.44% 00:22:11.228 cpu : usr=69.70%, sys=28.56%, ctx=59, majf=0, minf=5 00:22:11.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:11.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:11.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:11.228 issued rwts: total=21138,21128,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:11.228 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:11.228 00:22:11.228 Run status group 0 (all jobs): 00:22:11.228 READ: bw=41.2MiB/s (43.2MB/s), 41.2MiB/s-41.2MiB/s (43.2MB/s-43.2MB/s), io=82.6MiB (86.6MB), run=2004-2004msec 00:22:11.228 WRITE: bw=41.2MiB/s (43.2MB/s), 41.2MiB/s-41.2MiB/s (43.2MB/s-43.2MB/s), io=82.5MiB (86.5MB), run=2004-2004msec 00:22:11.228 15:32:28 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:11.228 15:32:28 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:11.228 15:32:28 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:11.228 15:32:28 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:11.228 15:32:28 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:11.228 15:32:28 -- common/autotest_common.sh@1326 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:11.228 15:32:28 -- common/autotest_common.sh@1327 -- # shift 00:22:11.228 15:32:28 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:11.228 15:32:28 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:11.228 15:32:28 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:11.228 15:32:28 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:11.228 15:32:28 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:11.228 15:32:28 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:11.228 15:32:28 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:11.228 15:32:28 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:11.228 15:32:28 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:11.228 15:32:28 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:11.228 15:32:28 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:11.228 15:32:28 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:11.228 15:32:28 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:11.228 15:32:28 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:11.228 15:32:28 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:11.489 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:11.489 fio-3.35 00:22:11.489 Starting 1 thread 00:22:11.489 EAL: No free 2048 kB hugepages reported on node 1 00:22:14.039 00:22:14.039 test: (groupid=0, jobs=1): err= 0: pid=1717481: Fri Apr 26 
15:32:31 2024 00:22:14.039 read: IOPS=9549, BW=149MiB/s (156MB/s)(300MiB/2009msec) 00:22:14.039 slat (usec): min=3, max=109, avg= 3.62, stdev= 1.55 00:22:14.039 clat (usec): min=1121, max=17975, avg=7989.15, stdev=1944.01 00:22:14.039 lat (usec): min=1125, max=17978, avg=7992.77, stdev=1944.16 00:22:14.039 clat percentiles (usec): 00:22:14.039 | 1.00th=[ 4424], 5.00th=[ 5145], 10.00th=[ 5604], 20.00th=[ 6259], 00:22:14.039 | 30.00th=[ 6849], 40.00th=[ 7308], 50.00th=[ 7767], 60.00th=[ 8356], 00:22:14.039 | 70.00th=[ 8979], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11207], 00:22:14.039 | 99.00th=[12911], 99.50th=[13698], 99.90th=[15795], 99.95th=[16450], 00:22:14.039 | 99.99th=[16909] 00:22:14.039 bw ( KiB/s): min=70912, max=83488, per=50.17%, avg=76656.00, stdev=5237.49, samples=4 00:22:14.039 iops : min= 4432, max= 5218, avg=4791.00, stdev=327.34, samples=4 00:22:14.039 write: IOPS=5597, BW=87.5MiB/s (91.7MB/s)(156MiB/1785msec); 0 zone resets 00:22:14.039 slat (usec): min=40, max=367, avg=41.17, stdev= 7.87 00:22:14.039 clat (usec): min=2571, max=16372, avg=9340.40, stdev=1650.75 00:22:14.039 lat (usec): min=2612, max=16412, avg=9381.57, stdev=1652.51 00:22:14.039 clat percentiles (usec): 00:22:14.039 | 1.00th=[ 6063], 5.00th=[ 7046], 10.00th=[ 7439], 20.00th=[ 7963], 00:22:14.039 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9634], 00:22:14.039 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11469], 95.00th=[12256], 00:22:14.039 | 99.00th=[13960], 99.50th=[15139], 99.90th=[15795], 99.95th=[15926], 00:22:14.039 | 99.99th=[16319] 00:22:14.039 bw ( KiB/s): min=73312, max=86528, per=88.90%, avg=79616.00, stdev=5584.59, samples=4 00:22:14.039 iops : min= 4582, max= 5408, avg=4976.00, stdev=349.04, samples=4 00:22:14.039 lat (msec) : 2=0.08%, 4=0.38%, 10=78.52%, 20=21.02% 00:22:14.039 cpu : usr=87.80%, sys=10.56%, ctx=14, majf=0, minf=14 00:22:14.039 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:22:14.039 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.039 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:14.039 issued rwts: total=19185,9991,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.039 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:14.039 00:22:14.039 Run status group 0 (all jobs): 00:22:14.039 READ: bw=149MiB/s (156MB/s), 149MiB/s-149MiB/s (156MB/s-156MB/s), io=300MiB (314MB), run=2009-2009msec 00:22:14.039 WRITE: bw=87.5MiB/s (91.7MB/s), 87.5MiB/s-87.5MiB/s (91.7MB/s-91.7MB/s), io=156MiB (164MB), run=1785-1785msec 00:22:14.039 15:32:31 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:14.039 15:32:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.039 15:32:31 -- common/autotest_common.sh@10 -- # set +x 00:22:14.039 15:32:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.039 15:32:31 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:22:14.039 15:32:31 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:22:14.039 15:32:31 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:22:14.039 15:32:31 -- host/fio.sh@84 -- # nvmftestfini 00:22:14.039 15:32:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:14.039 15:32:31 -- nvmf/common.sh@117 -- # sync 00:22:14.039 15:32:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:14.039 15:32:31 -- nvmf/common.sh@120 -- # set +e 00:22:14.039 15:32:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:14.039 15:32:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:14.039 rmmod nvme_tcp 00:22:14.039 rmmod nvme_fabrics 00:22:14.039 rmmod nvme_keyring 00:22:14.039 15:32:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:14.039 15:32:31 -- nvmf/common.sh@124 -- # set -e 00:22:14.039 15:32:31 -- nvmf/common.sh@125 -- # return 0 00:22:14.039 15:32:31 -- nvmf/common.sh@478 -- # '[' -n 1716450 ']' 00:22:14.039 15:32:31 -- nvmf/common.sh@479 -- # killprocess 1716450 00:22:14.039 15:32:31 -- 
common/autotest_common.sh@936 -- # '[' -z 1716450 ']' 00:22:14.039 15:32:31 -- common/autotest_common.sh@940 -- # kill -0 1716450 00:22:14.039 15:32:31 -- common/autotest_common.sh@941 -- # uname 00:22:14.039 15:32:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:14.039 15:32:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1716450 00:22:14.039 15:32:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:14.039 15:32:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:14.039 15:32:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1716450' 00:22:14.039 killing process with pid 1716450 00:22:14.039 15:32:31 -- common/autotest_common.sh@955 -- # kill 1716450 00:22:14.039 15:32:31 -- common/autotest_common.sh@960 -- # wait 1716450 00:22:14.039 15:32:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:14.039 15:32:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:14.039 15:32:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:14.039 15:32:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:14.039 15:32:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:14.039 15:32:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.039 15:32:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.039 15:32:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.589 15:32:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:16.589 00:22:16.589 real 0m15.807s 00:22:16.589 user 0m57.038s 00:22:16.589 sys 0m6.849s 00:22:16.589 15:32:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:16.589 15:32:33 -- common/autotest_common.sh@10 -- # set +x 00:22:16.589 ************************************ 00:22:16.589 END TEST nvmf_fio_host 00:22:16.589 ************************************ 00:22:16.589 15:32:33 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:16.589 15:32:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:16.589 15:32:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:16.589 15:32:33 -- common/autotest_common.sh@10 -- # set +x 00:22:16.589 ************************************ 00:22:16.589 START TEST nvmf_failover 00:22:16.589 ************************************ 00:22:16.589 15:32:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:16.589 * Looking for test storage... 00:22:16.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:16.589 15:32:33 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.589 15:32:33 -- nvmf/common.sh@7 -- # uname -s 00:22:16.589 15:32:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.589 15:32:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.589 15:32:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.589 15:32:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.589 15:32:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.589 15:32:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.589 15:32:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.589 15:32:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.589 15:32:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.589 15:32:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.589 15:32:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:16.589 15:32:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:16.589 15:32:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.589 15:32:33 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.589 15:32:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.589 15:32:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.589 15:32:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:16.589 15:32:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.589 15:32:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.589 15:32:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.589 15:32:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.589 15:32:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.589 15:32:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.589 15:32:33 -- paths/export.sh@5 -- # export PATH 00:22:16.589 15:32:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.589 15:32:33 -- nvmf/common.sh@47 -- # : 0 00:22:16.589 15:32:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:16.589 15:32:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:16.589 15:32:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.589 15:32:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.589 15:32:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.589 15:32:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:16.589 15:32:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:16.589 15:32:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:16.589 15:32:33 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:16.589 15:32:33 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:16.589 15:32:33 -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:16.589 15:32:33 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.589 15:32:33 -- host/failover.sh@18 -- # nvmftestinit 00:22:16.589 15:32:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:16.589 15:32:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.589 15:32:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:16.589 15:32:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:16.589 15:32:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:16.589 15:32:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.589 15:32:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.589 15:32:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.589 15:32:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:16.589 15:32:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:16.589 15:32:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:16.589 15:32:33 -- common/autotest_common.sh@10 -- # set +x 00:22:23.263 15:32:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:23.263 15:32:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:23.263 15:32:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:23.263 15:32:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:23.263 15:32:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:23.263 15:32:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:23.263 15:32:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:23.263 15:32:40 -- nvmf/common.sh@295 -- # net_devs=() 00:22:23.263 15:32:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:23.263 15:32:40 -- nvmf/common.sh@296 -- # e810=() 00:22:23.263 15:32:40 -- nvmf/common.sh@296 -- # local -ga e810 00:22:23.263 15:32:40 -- nvmf/common.sh@297 -- # x722=() 00:22:23.263 15:32:40 -- nvmf/common.sh@297 -- # local -ga x722 00:22:23.263 15:32:40 -- 
nvmf/common.sh@298 -- # mlx=() 00:22:23.263 15:32:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:23.263 15:32:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.263 15:32:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.263 15:32:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.263 15:32:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.263 15:32:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.263 15:32:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.263 15:32:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.263 15:32:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.263 15:32:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.263 15:32:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.263 15:32:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.263 15:32:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:23.263 15:32:40 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:23.263 15:32:40 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:23.263 15:32:40 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:23.263 15:32:40 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:23.263 15:32:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:23.263 15:32:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:23.263 15:32:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:23.263 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:23.263 15:32:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:23.263 15:32:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:23.263 15:32:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.263 15:32:40 -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.263 15:32:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:23.263 15:32:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:23.263 15:32:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:23.263 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:23.263 15:32:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:23.263 15:32:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:23.263 15:32:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.263 15:32:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.263 15:32:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:23.263 15:32:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:23.263 15:32:40 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:23.263 15:32:40 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:23.263 15:32:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:23.263 15:32:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.263 15:32:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:23.263 15:32:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.263 15:32:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:23.263 Found net devices under 0000:31:00.0: cvl_0_0 00:22:23.263 15:32:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.263 15:32:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:23.263 15:32:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.263 15:32:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:23.263 15:32:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.263 15:32:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:23.263 Found net devices under 0000:31:00.1: cvl_0_1 00:22:23.263 15:32:40 -- nvmf/common.sh@390 
-- # net_devs+=("${pci_net_devs[@]}") 00:22:23.263 15:32:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:23.263 15:32:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:23.263 15:32:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:23.263 15:32:40 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:23.263 15:32:40 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:23.263 15:32:40 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:23.263 15:32:40 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.263 15:32:40 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.263 15:32:40 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:23.263 15:32:40 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:23.263 15:32:40 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:23.263 15:32:40 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:23.263 15:32:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:23.263 15:32:40 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:23.263 15:32:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:23.263 15:32:40 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:23.263 15:32:40 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:23.263 15:32:40 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:23.525 15:32:40 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:23.525 15:32:40 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:23.525 15:32:40 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:23.525 15:32:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:23.525 15:32:40 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:23.525 15:32:40 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:23.525 
15:32:40 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:23.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:22:23.525 00:22:23.525 --- 10.0.0.2 ping statistics --- 00:22:23.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.525 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:22:23.525 15:32:40 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:23.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:23.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.410 ms 00:22:23.525 00:22:23.525 --- 10.0.0.1 ping statistics --- 00:22:23.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.525 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:22:23.525 15:32:40 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.525 15:32:40 -- nvmf/common.sh@411 -- # return 0 00:22:23.525 15:32:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:23.525 15:32:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.525 15:32:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:23.525 15:32:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:23.525 15:32:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.525 15:32:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:23.525 15:32:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:23.525 15:32:40 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:23.525 15:32:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:23.525 15:32:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:23.525 15:32:40 -- common/autotest_common.sh@10 -- # set +x 00:22:23.525 15:32:40 -- nvmf/common.sh@470 -- # nvmfpid=1722203 00:22:23.525 15:32:40 -- nvmf/common.sh@471 -- # waitforlisten 1722203 00:22:23.525 15:32:40 -- common/autotest_common.sh@817 -- # '[' -z 1722203 ']' 00:22:23.525 15:32:40 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.525 15:32:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:23.525 15:32:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.525 15:32:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:23.525 15:32:40 -- common/autotest_common.sh@10 -- # set +x 00:22:23.525 15:32:40 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:23.787 [2024-04-26 15:32:41.018283] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:22:23.787 [2024-04-26 15:32:41.018351] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.787 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.787 [2024-04-26 15:32:41.107388] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:23.787 [2024-04-26 15:32:41.199461] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.787 [2024-04-26 15:32:41.199522] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.787 [2024-04-26 15:32:41.199530] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.787 [2024-04-26 15:32:41.199537] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.787 [2024-04-26 15:32:41.199543] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:23.787 [2024-04-26 15:32:41.199674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.787 [2024-04-26 15:32:41.199850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.787 [2024-04-26 15:32:41.199894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:24.361 15:32:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:24.361 15:32:41 -- common/autotest_common.sh@850 -- # return 0 00:22:24.361 15:32:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:24.361 15:32:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:24.361 15:32:41 -- common/autotest_common.sh@10 -- # set +x 00:22:24.622 15:32:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.622 15:32:41 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:24.622 [2024-04-26 15:32:41.957649] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.622 15:32:41 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:24.882 Malloc0 00:22:24.882 15:32:42 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:25.142 15:32:42 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:25.142 15:32:42 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:25.410 [2024-04-26 15:32:42.644401] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.410 15:32:42 -- host/failover.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:25.410 [2024-04-26 15:32:42.804812] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:25.410 15:32:42 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:25.691 [2024-04-26 15:32:42.961314] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:25.691 15:32:42 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:25.691 15:32:42 -- host/failover.sh@31 -- # bdevperf_pid=1722569 00:22:25.691 15:32:42 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:25.691 15:32:42 -- host/failover.sh@34 -- # waitforlisten 1722569 /var/tmp/bdevperf.sock 00:22:25.691 15:32:42 -- common/autotest_common.sh@817 -- # '[' -z 1722569 ']' 00:22:25.691 15:32:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:25.692 15:32:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:25.692 15:32:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:25.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:25.692 15:32:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:25.692 15:32:42 -- common/autotest_common.sh@10 -- # set +x 00:22:26.634 15:32:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:26.634 15:32:43 -- common/autotest_common.sh@850 -- # return 0 00:22:26.634 15:32:43 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:26.896 NVMe0n1 00:22:26.896 15:32:44 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:27.157 00:22:27.157 15:32:44 -- host/failover.sh@39 -- # run_test_pid=1722910 00:22:27.157 15:32:44 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:27.157 15:32:44 -- host/failover.sh@41 -- # sleep 1 00:22:28.544 15:32:45 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:28.544 [2024-04-26 15:32:45.738405] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109e250 is same with the state(5) to be set 00:22:28.545 15:32:45 -- host/failover.sh@45 -- # sleep 3 00:22:31.848 15:32:48 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:31.848 00:22:31.848 15:32:49 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:31.848 [2024-04-26 15:32:49.221924] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109edf0 is same with the state(5) to be set 00:22:31.848 15:32:49 -- host/failover.sh@50 -- # sleep 3 00:22:35.152 15:32:52 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:35.152 [2024-04-26 15:32:52.399698] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.152 15:32:52 -- host/failover.sh@55 -- # sleep 1 00:22:36.095 15:32:53 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:36.356 [2024-04-26 15:32:53.574060] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5cb0 is same with the state(5) to be set
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5cb0 is same with the state(5) to be set 00:22:36.357 [2024-04-26 15:32:53.574300] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5cb0 is same with the state(5) to be set 00:22:36.357 [2024-04-26 15:32:53.574304] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5cb0 is same with the state(5) to be set 00:22:36.357 [2024-04-26 15:32:53.574309] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5cb0 is same with the state(5) to be set 00:22:36.357 [2024-04-26 15:32:53.574314] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5cb0 is same with the state(5) to be set 00:22:36.357 [2024-04-26 15:32:53.574318] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5cb0 is same with the state(5) to be set 00:22:36.357 [2024-04-26 15:32:53.574323] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5cb0 is same with the state(5) to be set 00:22:36.357 [2024-04-26 15:32:53.574328] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5cb0 is same with the state(5) to be set 00:22:36.357 [2024-04-26 15:32:53.574333] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5cb0 is same with the state(5) to be set 00:22:36.357 [2024-04-26 15:32:53.574338] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5cb0 is same with the state(5) to be set 00:22:36.357 [2024-04-26 15:32:53.574342] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5cb0 is same with the state(5) to be set 00:22:36.357 [2024-04-26 15:32:53.574346] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5cb0 is same with the state(5) to be set 00:22:36.357 [2024-04-26 15:32:53.574351] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xef5cb0 is same with the state(5) to be set 00:22:36.357 [2024-04-26 15:32:53.574357] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5cb0 is same with the state(5) to be set 00:22:36.357 [2024-04-26 15:32:53.574362] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5cb0 is same with the state(5) to be set 00:22:36.357 [2024-04-26 15:32:53.574367] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5cb0 is same with the state(5) to be set 00:22:36.357 [2024-04-26 15:32:53.574371] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5cb0 is same with the state(5) to be set 00:22:36.357 [2024-04-26 15:32:53.574376] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5cb0 is same with the state(5) to be set 00:22:36.357 [2024-04-26 15:32:53.574381] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5cb0 is same with the state(5) to be set 00:22:36.357 [2024-04-26 15:32:53.574385] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5cb0 is same with the state(5) to be set 00:22:36.357 [2024-04-26 15:32:53.574390] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5cb0 is same with the state(5) to be set 00:22:36.357 [2024-04-26 15:32:53.574395] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef5cb0 is same with the state(5) to be set 00:22:36.357 15:32:53 -- host/failover.sh@59 -- # wait 1722910 00:22:42.952 0 00:22:42.952 15:32:59 -- host/failover.sh@61 -- # killprocess 1722569 00:22:42.952 15:32:59 -- common/autotest_common.sh@936 -- # '[' -z 1722569 ']' 00:22:42.952 15:32:59 -- common/autotest_common.sh@940 -- # kill -0 1722569 00:22:42.952 15:32:59 -- common/autotest_common.sh@941 -- # uname 00:22:42.952 15:32:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
00:22:42.952 15:32:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1722569
00:22:42.952 15:32:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:22:42.952 15:32:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:22:42.952 15:32:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1722569'
killing process with pid 1722569
15:32:59 -- common/autotest_common.sh@955 -- # kill 1722569
00:22:42.952 15:32:59 -- common/autotest_common.sh@960 -- # wait 1722569
00:22:42.952 15:32:59 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:42.952 [2024-04-26 15:32:43.035789] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:22:42.952 [2024-04-26 15:32:43.035850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1722569 ]
00:22:42.952 EAL: No free 2048 kB hugepages reported on node 1
00:22:42.952 [2024-04-26 15:32:43.095403] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:42.952 [2024-04-26 15:32:43.157906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:42.952 Running I/O for 15 seconds...
00:22:42.952 [2024-04-26 15:32:45.740543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.952 [2024-04-26 15:32:45.740577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.952 [2024-04-26 15:32:45.740593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.952 [2024-04-26 15:32:45.740602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.952 [2024-04-26 15:32:45.740612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.952 [2024-04-26 15:32:45.740619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.952 [2024-04-26 15:32:45.740628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.952 [2024-04-26 15:32:45.740636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.952 [2024-04-26 15:32:45.740645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.952 [2024-04-26 15:32:45.740652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.952 [2024-04-26 15:32:45.740661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.952 [2024-04-26 15:32:45.740668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.952 [2024-04-26 15:32:45.740677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.952 [2024-04-26 15:32:45.740684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.952 [2024-04-26 15:32:45.740693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.952 [2024-04-26 15:32:45.740700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.952 [2024-04-26 15:32:45.740709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.952 [2024-04-26 15:32:45.740717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.952 [2024-04-26 15:32:45.740726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.952 [2024-04-26 15:32:45.740733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.952 [2024-04-26 15:32:45.740743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.952 [2024-04-26 15:32:45.740750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.740765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.953 [2024-04-26 15:32:45.740773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.740782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.953 [2024-04-26 15:32:45.740789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.740798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.953 [2024-04-26 15:32:45.740805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.740815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.953 [2024-04-26 15:32:45.740822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.740831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.953 [2024-04-26 15:32:45.740843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.740853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.953 [2024-04-26 15:32:45.740861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.740870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.953 [2024-04-26 15:32:45.740877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.740886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.953 [2024-04-26 15:32:45.740893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.740902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.953 [2024-04-26 15:32:45.740909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.740918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.740925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.740934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.740941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.740950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.740957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.740965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.740974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.740983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.740990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.740999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.741015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.741031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.741046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.741062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.741078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.741093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.741108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.741124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.741141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.741157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.741175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.741192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.741208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.741224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.741240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.741256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.741271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.741288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.741303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.741319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.741335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.741350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.741366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.953 [2024-04-26 15:32:45.741384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.953 [2024-04-26 15:32:45.741391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.954 [2024-04-26 15:32:45.741863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.954 [2024-04-26 15:32:45.741870] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.954 [2024-04-26 15:32:45.741878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.954 [2024-04-26 15:32:45.741885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.954 [2024-04-26 15:32:45.741894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.954 [2024-04-26 15:32:45.741902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.954 [2024-04-26 15:32:45.741911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.954 [2024-04-26 15:32:45.741918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.954 [2024-04-26 15:32:45.741927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.954 [2024-04-26 15:32:45.741933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.954 [2024-04-26 15:32:45.741943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.954 [2024-04-26 15:32:45.741950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.954 [2024-04-26 15:32:45.741958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 
lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.954 [2024-04-26 15:32:45.741965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.954 [2024-04-26 15:32:45.741974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.954 [2024-04-26 15:32:45.741982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.954 [2024-04-26 15:32:45.741991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.954 [2024-04-26 15:32:45.741998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.954 [2024-04-26 15:32:45.742007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.955 [2024-04-26 15:32:45.742014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.955 [2024-04-26 15:32:45.742029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.955 [2024-04-26 15:32:45.742046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 
15:32:45.742055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.955 [2024-04-26 15:32:45.742062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.955 [2024-04-26 15:32:45.742077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.955 [2024-04-26 15:32:45.742094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.955 [2024-04-26 15:32:45.742109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.955 [2024-04-26 15:32:45.742125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.955 [2024-04-26 15:32:45.742141] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.955 [2024-04-26 15:32:45.742157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.955 [2024-04-26 15:32:45.742173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.955 [2024-04-26 15:32:45.742191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.955 [2024-04-26 15:32:45.742207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.955 [2024-04-26 15:32:45.742222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:22:42.955 [2024-04-26 15:32:45.742238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.955 [2024-04-26 15:32:45.742254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.955 [2024-04-26 15:32:45.742270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.955 [2024-04-26 15:32:45.742286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.955 [2024-04-26 15:32:45.742302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.955 [2024-04-26 15:32:45.742318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742326] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.955 [2024-04-26 15:32:45.742334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.955 [2024-04-26 15:32:45.742350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.955 [2024-04-26 15:32:45.742365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.955 [2024-04-26 15:32:45.742381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.955 [2024-04-26 15:32:45.742411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96936 len:8 PRP1 0x0 PRP2 0x0 00:22:42.955 [2024-04-26 15:32:45.742418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.955 [2024-04-26 15:32:45.742434] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.955 [2024-04-26 15:32:45.742440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96944 len:8 PRP1 0x0 PRP2 0x0 00:22:42.955 [2024-04-26 15:32:45.742447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.955 [2024-04-26 15:32:45.742460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.955 [2024-04-26 15:32:45.742466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96952 len:8 PRP1 0x0 PRP2 0x0 00:22:42.955 [2024-04-26 15:32:45.742473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.955 [2024-04-26 15:32:45.742486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.955 [2024-04-26 15:32:45.742492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96960 len:8 PRP1 0x0 PRP2 0x0 00:22:42.955 [2024-04-26 15:32:45.742499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.955 [2024-04-26 15:32:45.742511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.955 [2024-04-26 15:32:45.742517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96968 len:8 PRP1 0x0 PRP2 0x0 00:22:42.955 
[2024-04-26 15:32:45.742524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.955 [2024-04-26 15:32:45.742537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.955 [2024-04-26 15:32:45.742543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96976 len:8 PRP1 0x0 PRP2 0x0 00:22:42.955 [2024-04-26 15:32:45.742550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.955 [2024-04-26 15:32:45.742562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.955 [2024-04-26 15:32:45.742568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96984 len:8 PRP1 0x0 PRP2 0x0 00:22:42.955 [2024-04-26 15:32:45.742575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.955 [2024-04-26 15:32:45.742588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.955 [2024-04-26 15:32:45.742594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96992 len:8 PRP1 0x0 PRP2 0x0 00:22:42.955 [2024-04-26 15:32:45.742601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:22:42.955 [2024-04-26 15:32:45.742615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.955 [2024-04-26 15:32:45.742621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97000 len:8 PRP1 0x0 PRP2 0x0 00:22:42.955 [2024-04-26 15:32:45.742628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.955 [2024-04-26 15:32:45.742635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.955 [2024-04-26 15:32:45.742640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.956 [2024-04-26 15:32:45.742646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97008 len:8 PRP1 0x0 PRP2 0x0 00:22:42.956 [2024-04-26 15:32:45.742653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.956 [2024-04-26 15:32:45.742661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.956 [2024-04-26 15:32:45.742666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.956 [2024-04-26 15:32:45.742672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97016 len:8 PRP1 0x0 PRP2 0x0 00:22:42.956 [2024-04-26 15:32:45.742679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.956 [2024-04-26 15:32:45.742687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.956 [2024-04-26 15:32:45.742692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.956 [2024-04-26 15:32:45.742698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97024 len:8 PRP1 0x0 PRP2 0x0 00:22:42.956 [2024-04-26 15:32:45.742705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.956 [2024-04-26 15:32:45.742712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.956 [2024-04-26 15:32:45.742717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.956 [2024-04-26 15:32:45.742724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97032 len:8 PRP1 0x0 PRP2 0x0 00:22:42.956 [2024-04-26 15:32:45.742731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.956 [2024-04-26 15:32:45.742738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.956 [2024-04-26 15:32:45.742744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.956 [2024-04-26 15:32:45.742749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97040 len:8 PRP1 0x0 PRP2 0x0 00:22:42.956 [2024-04-26 15:32:45.742756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.956 [2024-04-26 15:32:45.742763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.956 [2024-04-26 15:32:45.742769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.956 [2024-04-26 15:32:45.742775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97048 len:8 PRP1 0x0 PRP2 0x0 00:22:42.956 [2024-04-26 15:32:45.742782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:42.956 [2024-04-26 15:32:45.742789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.956 [2024-04-26 15:32:45.742794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.956 [2024-04-26 15:32:45.742800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97056 len:8 PRP1 0x0 PRP2 0x0 00:22:42.956 [2024-04-26 15:32:45.742808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.956 [2024-04-26 15:32:45.742846] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9316f0 was disconnected and freed. reset controller. 00:22:42.956 [2024-04-26 15:32:45.742856] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:42.956 [2024-04-26 15:32:45.742876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.956 [2024-04-26 15:32:45.742884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.956 [2024-04-26 15:32:45.742893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.956 [2024-04-26 15:32:45.742900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.956 [2024-04-26 15:32:45.742907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.956 [2024-04-26 15:32:45.742915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.956 [2024-04-26 15:32:45.742923] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.956 [2024-04-26 15:32:45.742930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.956 [2024-04-26 15:32:45.742938] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:42.956 [2024-04-26 15:32:45.742976] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x93bc50 (9): Bad file descriptor 00:22:42.956 [2024-04-26 15:32:45.746428] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:42.956 [2024-04-26 15:32:45.918637] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:42.956 [2024-04-26 15:32:49.223984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.956 [2024-04-26 15:32:49.224020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.956 [2024-04-26 15:32:49.224037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.956 [2024-04-26 15:32:49.224045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.956 [2024-04-26 15:32:49.224055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.956 [2024-04-26 15:32:49.224062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.956 [2024-04-26 15:32:49.224072] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:50000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.956 [2024-04-26 15:32:49.224079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.956 [2024-04-26 15:32:49.224089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:50008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.956 [2024-04-26 15:32:49.224096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.956 [2024-04-26 15:32:49.224105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:50016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.956 [2024-04-26 15:32:49.224117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.956 [2024-04-26 15:32:49.224126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.956 [2024-04-26 15:32:49.224133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.956 [2024-04-26 15:32:49.224142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.956 [2024-04-26 15:32:49.224149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.956 [2024-04-26 15:32:49.224158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:50040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.956 [2024-04-26 15:32:49.224165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.956 [2024-04-26 15:32:49.224174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.956 [2024-04-26 15:32:49.224181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.956 [2024-04-26 15:32:49.224190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.956 [2024-04-26 15:32:49.224197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.956 [2024-04-26 15:32:49.224206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:50064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.956 [2024-04-26 15:32:49.224214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.956 [2024-04-26 15:32:49.224222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.956 [2024-04-26 15:32:49.224230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.956 [2024-04-26 15:32:49.224239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:50080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.956 [2024-04-26 15:32:49.224247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.956 [2024-04-26 15:32:49.224256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.956 [2024-04-26 15:32:49.224263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.956 [2024-04-26 15:32:49.224272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:50096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.956 [2024-04-26 15:32:49.224279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.956 [2024-04-26 15:32:49.224288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.956 [2024-04-26 15:32:49.224295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.956 [2024-04-26 15:32:49.224304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:50112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.956 [2024-04-26 15:32:49.224312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.956 [2024-04-26 15:32:49.224326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.956 [2024-04-26 15:32:49.224333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.956 [2024-04-26 15:32:49.224343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:50128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.956 [2024-04-26 15:32:49.224350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.956 [2024-04-26 15:32:49.224359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.956 [2024-04-26 15:32:49.224367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.956 [2024-04-26 15:32:49.224376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:50144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:50152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:50168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:50176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:50208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:50248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:50280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:50288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:50296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:50320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:50328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:50344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:50352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:50360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:50368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:50376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:50384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.957 [2024-04-26 15:32:49.224872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.957 [2024-04-26 15:32:49.224881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:50392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.224888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.224897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:50400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.224904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.224913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:50408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.224920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.224929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.224936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.224947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:50424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.224954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.224963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:50432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.224969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.224978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:50440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.224986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.224995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:50448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:50456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:50464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:50472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:50480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:50488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:50496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:50504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:50520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:50528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:50536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:50544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:50560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:50568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:50576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:50584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:50600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:50608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:50616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:50624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:50632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:50640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:50656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:50672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.958 [2024-04-26 15:32:49.225460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:50680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.958 [2024-04-26 15:32:49.225467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.959 [2024-04-26 15:32:49.225483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:50696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.959 [2024-04-26 15:32:49.225499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:50704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.959 [2024-04-26 15:32:49.225515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:50712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.959 [2024-04-26 15:32:49.225531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:50720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.959 [2024-04-26 15:32:49.225547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:50728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.959 [2024-04-26 15:32:49.225564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:50736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.959 [2024-04-26 15:32:49.225580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:50744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.959 [2024-04-26 15:32:49.225597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:50752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.959 [2024-04-26 15:32:49.225612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:50760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.959 [2024-04-26 15:32:49.225629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:50768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.959 [2024-04-26 15:32:49.225645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:50776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.959 [2024-04-26 15:32:49.225660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:50784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.959 [2024-04-26 15:32:49.225677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:50792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.959 [2024-04-26 15:32:49.225693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:50800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.959 [2024-04-26 15:32:49.225709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:50808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.959 [2024-04-26 15:32:49.225725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:50816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.959 [2024-04-26 15:32:49.225741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:50824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.959 [2024-04-26 15:32:49.225758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:50832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.959 [2024-04-26 15:32:49.225774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:50840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.959 [2024-04-26 15:32:49.225790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:50848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.959 [2024-04-26 15:32:49.225805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.959 [2024-04-26 15:32:49.225821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:50864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.959 [2024-04-26 15:32:49.225840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:50872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.959 [2024-04-26 15:32:49.225856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:49864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.959 [2024-04-26 15:32:49.225873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:49872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.959 [2024-04-26 15:32:49.225889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:49880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.959 [2024-04-26 15:32:49.225905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:49888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.959 [2024-04-26 15:32:49.225921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:49896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.959 [2024-04-26 15:32:49.225937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:49904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.959 [2024-04-26 15:32:49.225954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:49912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.959 [2024-04-26 15:32:49.225971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:49920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.959 [2024-04-26 15:32:49.225987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.225996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:49928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.959 [2024-04-26 15:32:49.226003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.959 [2024-04-26 15:32:49.226012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:49936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.959 [2024-04-26 15:32:49.226019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.959 [2024-04-26 15:32:49.226028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:49944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.959 [2024-04-26 15:32:49.226035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.959 [2024-04-26 15:32:49.226044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:49952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.959 [2024-04-26 15:32:49.226051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.959 [2024-04-26 15:32:49.226060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:49960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.959 [2024-04-26 15:32:49.226067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.959 [2024-04-26 15:32:49.226076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:49968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.959 [2024-04-26 15:32:49.226083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.959 [2024-04-26 15:32:49.226102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.959 [2024-04-26 15:32:49.226109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.959 [2024-04-26 15:32:49.226115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49976 len:8 PRP1 0x0 PRP2 0x0 00:22:42.959 [2024-04-26 15:32:49.226122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:49.226157] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9480f0 was disconnected and freed. reset controller. 00:22:42.960 [2024-04-26 15:32:49.226166] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:42.960 [2024-04-26 15:32:49.226186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.960 [2024-04-26 15:32:49.226194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:49.226202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.960 [2024-04-26 15:32:49.226209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:49.226216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.960 [2024-04-26 15:32:49.226225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:49.226233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.960 [2024-04-26 15:32:49.226240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:49.226247] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:42.960 [2024-04-26 15:32:49.229710] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:42.960 [2024-04-26 15:32:49.229735] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x93bc50 (9): Bad file descriptor 00:22:42.960 [2024-04-26 15:32:49.433437] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:42.960 [2024-04-26 15:32:53.576024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:90728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:90736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:90744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:90760 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:90776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:90784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:90792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:90800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576233] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:90816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:90832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:90840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:90848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:90864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:90872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:90880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:90888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:90896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:42.960 [2024-04-26 15:32:53.576415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:90912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:90920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:90928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:90936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576508] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:90944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:90960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:90968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:90976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.960 [2024-04-26 15:32:53.576588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:90984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.960 [2024-04-26 15:32:53.576596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.576606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:90992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.576614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.576623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:91000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.576631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.576641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:91008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.576648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.576657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:91016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.576664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.576673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:91024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.576680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.576689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:91032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:42.961 [2024-04-26 15:32:53.576697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.576707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:91040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.576714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.576723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:91048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.576730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.576739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.576746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.576755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:91064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.576762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.576771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:91072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.576778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.576787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.576794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.576805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:91088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.576812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.576821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:91096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.576828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.576843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:91104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.576850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.576859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:91112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.576867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.576875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:91120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.576882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.576891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:91128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.576898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.576907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:91136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.576916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.576925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:91144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.576932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.576941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:91152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.576948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.576957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.576964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.576973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:91168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 
[2024-04-26 15:32:53.576980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.576989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.576997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.577006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:91184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.577013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.577022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:91192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.577029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.577037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:91200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.577044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.577055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:91208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.577062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.577071] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:91216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.577078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.577087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.577094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.577103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:91232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.961 [2024-04-26 15:32:53.577110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.961 [2024-04-26 15:32:53.577118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:91240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.962 [2024-04-26 15:32:53.577126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.962 [2024-04-26 15:32:53.577134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:91248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.962 [2024-04-26 15:32:53.577142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.962 [2024-04-26 15:32:53.577151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.962 [2024-04-26 15:32:53.577158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.962 [2024-04-26 15:32:53.577167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.962 [2024-04-26 15:32:53.577174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.962
[... repeated WRITE / ABORTED - SQ DELETION (00/08) pairs for lba 91272 through 91568 elided; only the cid and lba fields vary ...]
[2024-04-26 15:32:53.577801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.963
[... repeated aborting queued i/o / Command completed manually / WRITE (PRP1 0x0 PRP2 0x0) / ABORTED - SQ DELETION cycles for lba 91576 through 91736 elided ...]
[2024-04-26 15:32:53.590600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs:
*ERROR*: aborting queued i/o 00:22:42.963 [2024-04-26 15:32:53.590605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.963 [2024-04-26 15:32:53.590611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91744 len:8 PRP1 0x0 PRP2 0x0 00:22:42.963 [2024-04-26 15:32:53.590618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.963 [2024-04-26 15:32:53.590656] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9480f0 was disconnected and freed. reset controller. 00:22:42.964 [2024-04-26 15:32:53.590667] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:42.964 [2024-04-26 15:32:53.590695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.964 [2024-04-26 15:32:53.590704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.964 [2024-04-26 15:32:53.590714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.964 [2024-04-26 15:32:53.590721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.964 [2024-04-26 15:32:53.590729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.964 [2024-04-26 15:32:53.590736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.964 [2024-04-26 15:32:53.590745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:42.964 [2024-04-26 15:32:53.590752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.964 [2024-04-26 15:32:53.590759] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:42.964 [2024-04-26 15:32:53.590798] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x93bc50 (9): Bad file descriptor 00:22:42.964 [2024-04-26 15:32:53.594299] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:42.964 [2024-04-26 15:32:53.669739] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:42.964 00:22:42.964 Latency(us) 00:22:42.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.964 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:42.964 Verification LBA range: start 0x0 length 0x4000 00:22:42.964 NVMe0n1 : 15.00 10945.26 42.75 1090.65 0.00 10606.83 771.41 23265.28 00:22:42.964 =================================================================================================================== 00:22:42.964 Total : 10945.26 42.75 1090.65 0.00 10606.83 771.41 23265.28 00:22:42.964 Received shutdown signal, test time was about 15.000000 seconds 00:22:42.964 00:22:42.964 Latency(us) 00:22:42.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.964 =================================================================================================================== 00:22:42.964 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:42.964 15:32:59 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:42.964 15:32:59 -- host/failover.sh@65 -- # count=3 00:22:42.964 15:32:59 -- host/failover.sh@67 -- # (( count != 3 )) 00:22:42.964 15:32:59 -- host/failover.sh@73 -- # bdevperf_pid=1725922 
00:22:42.964 15:32:59 -- host/failover.sh@75 -- # waitforlisten 1725922 /var/tmp/bdevperf.sock 00:22:42.964 15:32:59 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:42.964 15:32:59 -- common/autotest_common.sh@817 -- # '[' -z 1725922 ']' 00:22:42.964 15:32:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:42.964 15:32:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:42.964 15:32:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:42.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:42.964 15:32:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:42.964 15:32:59 -- common/autotest_common.sh@10 -- # set +x 00:22:43.536 15:33:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:43.536 15:33:00 -- common/autotest_common.sh@850 -- # return 0 00:22:43.536 15:33:00 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:43.536 [2024-04-26 15:33:00.899262] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:43.536 15:33:00 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:43.796 [2024-04-26 15:33:01.071710] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:43.796 15:33:01 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:22:44.056 NVMe0n1 00:22:44.056 15:33:01 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:44.627 00:22:44.627 15:33:01 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:44.888 00:22:44.888 15:33:02 -- host/failover.sh@82 -- # grep -q NVMe0 00:22:44.888 15:33:02 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:44.888 15:33:02 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:45.265 15:33:02 -- host/failover.sh@87 -- # sleep 3 00:22:48.582 15:33:05 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:48.582 15:33:05 -- host/failover.sh@88 -- # grep -q NVMe0 00:22:48.582 15:33:05 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:48.582 15:33:05 -- host/failover.sh@90 -- # run_test_pid=1727058 00:22:48.582 15:33:05 -- host/failover.sh@92 -- # wait 1727058 00:22:49.520 0 00:22:49.520 15:33:06 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:49.520 [2024-04-26 15:32:59.984615] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:22:49.520 [2024-04-26 15:32:59.984668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1725922 ] 00:22:49.520 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.520 [2024-04-26 15:33:00.045691] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.520 [2024-04-26 15:33:00.107519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.520 [2024-04-26 15:33:02.462112] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:49.520 [2024-04-26 15:33:02.462155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.520 [2024-04-26 15:33:02.462167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.520 [2024-04-26 15:33:02.462176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.520 [2024-04-26 15:33:02.462183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.520 [2024-04-26 15:33:02.462191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.520 [2024-04-26 15:33:02.462198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.520 [2024-04-26 15:33:02.462205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.520 [2024-04-26 15:33:02.462212] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.520 [2024-04-26 15:33:02.462219] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:49.520 [2024-04-26 15:33:02.462249] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:49.520 [2024-04-26 15:33:02.462264] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb6c50 (9): Bad file descriptor 00:22:49.520 [2024-04-26 15:33:02.475377] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:49.520 Running I/O for 1 seconds... 00:22:49.520 00:22:49.520 Latency(us) 00:22:49.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.520 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:49.520 Verification LBA range: start 0x0 length 0x4000 00:22:49.520 NVMe0n1 : 1.01 11357.40 44.36 0.00 0.00 11215.62 2430.29 9502.72 00:22:49.520 =================================================================================================================== 00:22:49.520 Total : 11357.40 44.36 0.00 0.00 11215.62 2430.29 9502.72 00:22:49.520 15:33:06 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:49.520 15:33:06 -- host/failover.sh@95 -- # grep -q NVMe0 00:22:49.520 15:33:06 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:49.780 15:33:07 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:49.780 15:33:07 -- host/failover.sh@99 -- # grep -q NVMe0 00:22:50.040 15:33:07 -- host/failover.sh@100 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:50.040 15:33:07 -- host/failover.sh@101 -- # sleep 3 00:22:53.340 15:33:10 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:53.340 15:33:10 -- host/failover.sh@103 -- # grep -q NVMe0 00:22:53.340 15:33:10 -- host/failover.sh@108 -- # killprocess 1725922 00:22:53.340 15:33:10 -- common/autotest_common.sh@936 -- # '[' -z 1725922 ']' 00:22:53.340 15:33:10 -- common/autotest_common.sh@940 -- # kill -0 1725922 00:22:53.340 15:33:10 -- common/autotest_common.sh@941 -- # uname 00:22:53.340 15:33:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:53.340 15:33:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1725922 00:22:53.340 15:33:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:53.340 15:33:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:53.340 15:33:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1725922' 00:22:53.340 killing process with pid 1725922 00:22:53.340 15:33:10 -- common/autotest_common.sh@955 -- # kill 1725922 00:22:53.340 15:33:10 -- common/autotest_common.sh@960 -- # wait 1725922 00:22:53.601 15:33:10 -- host/failover.sh@110 -- # sync 00:22:53.601 15:33:10 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:53.601 15:33:10 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:53.601 15:33:10 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:53.601 15:33:10 -- host/failover.sh@116 -- # nvmftestfini 00:22:53.601 15:33:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:53.601 15:33:10 -- 
nvmf/common.sh@117 -- # sync 00:22:53.601 15:33:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:53.601 15:33:10 -- nvmf/common.sh@120 -- # set +e 00:22:53.601 15:33:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:53.601 15:33:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:53.601 rmmod nvme_tcp 00:22:53.601 rmmod nvme_fabrics 00:22:53.601 rmmod nvme_keyring 00:22:53.861 15:33:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:53.861 15:33:11 -- nvmf/common.sh@124 -- # set -e 00:22:53.861 15:33:11 -- nvmf/common.sh@125 -- # return 0 00:22:53.861 15:33:11 -- nvmf/common.sh@478 -- # '[' -n 1722203 ']' 00:22:53.861 15:33:11 -- nvmf/common.sh@479 -- # killprocess 1722203 00:22:53.861 15:33:11 -- common/autotest_common.sh@936 -- # '[' -z 1722203 ']' 00:22:53.861 15:33:11 -- common/autotest_common.sh@940 -- # kill -0 1722203 00:22:53.861 15:33:11 -- common/autotest_common.sh@941 -- # uname 00:22:53.861 15:33:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:53.861 15:33:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1722203 00:22:53.861 15:33:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:53.861 15:33:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:53.861 15:33:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1722203' 00:22:53.861 killing process with pid 1722203 00:22:53.861 15:33:11 -- common/autotest_common.sh@955 -- # kill 1722203 00:22:53.861 15:33:11 -- common/autotest_common.sh@960 -- # wait 1722203 00:22:53.862 15:33:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:53.862 15:33:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:53.862 15:33:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:53.862 15:33:11 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:53.862 15:33:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:53.862 15:33:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:22:53.862 15:33:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.862 15:33:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.408 15:33:13 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:56.408 00:22:56.408 real 0m39.633s 00:22:56.408 user 2m2.771s 00:22:56.408 sys 0m7.964s 00:22:56.408 15:33:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:56.408 15:33:13 -- common/autotest_common.sh@10 -- # set +x 00:22:56.408 ************************************ 00:22:56.408 END TEST nvmf_failover 00:22:56.408 ************************************ 00:22:56.408 15:33:13 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:56.408 15:33:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:56.408 15:33:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:56.408 15:33:13 -- common/autotest_common.sh@10 -- # set +x 00:22:56.408 ************************************ 00:22:56.408 START TEST nvmf_discovery 00:22:56.408 ************************************ 00:22:56.408 15:33:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:56.408 * Looking for test storage... 
00:22:56.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:56.408 15:33:13 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:56.408 15:33:13 -- nvmf/common.sh@7 -- # uname -s 00:22:56.408 15:33:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:56.408 15:33:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:56.408 15:33:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:56.408 15:33:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:56.408 15:33:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:56.408 15:33:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:56.408 15:33:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:56.408 15:33:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:56.408 15:33:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:56.408 15:33:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:56.408 15:33:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:56.408 15:33:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:56.408 15:33:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:56.408 15:33:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:56.408 15:33:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:56.408 15:33:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:56.408 15:33:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:56.408 15:33:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:56.408 15:33:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.408 15:33:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.408 15:33:13 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.408 15:33:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.408 15:33:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.408 15:33:13 -- paths/export.sh@5 -- # export PATH 00:22:56.409 15:33:13 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.409 15:33:13 -- nvmf/common.sh@47 -- # : 0 00:22:56.409 15:33:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:56.409 15:33:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:56.409 15:33:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:56.409 15:33:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:56.409 15:33:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:56.409 15:33:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:56.409 15:33:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:56.409 15:33:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:56.409 15:33:13 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:56.409 15:33:13 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:56.409 15:33:13 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:56.409 15:33:13 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:56.409 15:33:13 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:56.409 15:33:13 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:56.409 15:33:13 -- host/discovery.sh@25 -- # nvmftestinit 00:22:56.409 15:33:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:56.409 15:33:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.409 15:33:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:56.409 15:33:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:56.409 
15:33:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:56.409 15:33:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.409 15:33:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:56.409 15:33:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.409 15:33:13 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:56.409 15:33:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:56.409 15:33:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:56.409 15:33:13 -- common/autotest_common.sh@10 -- # set +x 00:23:03.001 15:33:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:03.001 15:33:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:03.001 15:33:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:03.001 15:33:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:03.001 15:33:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:03.001 15:33:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:03.001 15:33:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:03.001 15:33:20 -- nvmf/common.sh@295 -- # net_devs=() 00:23:03.001 15:33:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:03.001 15:33:20 -- nvmf/common.sh@296 -- # e810=() 00:23:03.001 15:33:20 -- nvmf/common.sh@296 -- # local -ga e810 00:23:03.001 15:33:20 -- nvmf/common.sh@297 -- # x722=() 00:23:03.001 15:33:20 -- nvmf/common.sh@297 -- # local -ga x722 00:23:03.001 15:33:20 -- nvmf/common.sh@298 -- # mlx=() 00:23:03.001 15:33:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:03.001 15:33:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:03.001 15:33:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:03.001 15:33:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:03.001 15:33:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:03.001 15:33:20 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:03.001 15:33:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:03.001 15:33:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:03.001 15:33:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:03.001 15:33:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:03.001 15:33:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:03.001 15:33:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:03.001 15:33:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:03.001 15:33:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:03.001 15:33:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:03.001 15:33:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:03.001 15:33:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:03.001 15:33:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:03.001 15:33:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:03.001 15:33:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:03.001 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:03.001 15:33:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:03.001 15:33:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:03.001 15:33:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.001 15:33:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.001 15:33:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:03.001 15:33:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:03.001 15:33:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:03.001 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:03.001 15:33:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:03.001 15:33:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:03.001 15:33:20 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.001 15:33:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.001 15:33:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:03.001 15:33:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:03.001 15:33:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:03.001 15:33:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:03.001 15:33:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:03.001 15:33:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.001 15:33:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:03.001 15:33:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.001 15:33:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:03.001 Found net devices under 0000:31:00.0: cvl_0_0 00:23:03.001 15:33:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.001 15:33:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:03.001 15:33:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.001 15:33:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:03.001 15:33:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.001 15:33:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:03.001 Found net devices under 0000:31:00.1: cvl_0_1 00:23:03.001 15:33:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.001 15:33:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:03.001 15:33:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:03.001 15:33:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:03.001 15:33:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:03.001 15:33:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:03.001 15:33:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:03.001 15:33:20 -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:03.001 15:33:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:03.001 15:33:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:03.001 15:33:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:03.001 15:33:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:03.001 15:33:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:03.001 15:33:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:03.001 15:33:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:03.001 15:33:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:03.263 15:33:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:03.263 15:33:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:03.263 15:33:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:03.263 15:33:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:03.263 15:33:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:03.263 15:33:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:03.263 15:33:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:03.525 15:33:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:03.525 15:33:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:03.525 15:33:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:03.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:03.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:23:03.525 00:23:03.525 --- 10.0.0.2 ping statistics --- 00:23:03.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.525 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:23:03.525 15:33:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:03.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:03.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:23:03.525 00:23:03.525 --- 10.0.0.1 ping statistics --- 00:23:03.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.525 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:23:03.525 15:33:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:03.525 15:33:20 -- nvmf/common.sh@411 -- # return 0 00:23:03.525 15:33:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:03.525 15:33:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:03.525 15:33:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:03.525 15:33:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:03.525 15:33:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:03.525 15:33:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:03.525 15:33:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:03.525 15:33:20 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:03.525 15:33:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:03.525 15:33:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:03.525 15:33:20 -- common/autotest_common.sh@10 -- # set +x 00:23:03.525 15:33:20 -- nvmf/common.sh@470 -- # nvmfpid=1732905 00:23:03.525 15:33:20 -- nvmf/common.sh@471 -- # waitforlisten 1732905 00:23:03.525 15:33:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:03.525 15:33:20 -- common/autotest_common.sh@817 
-- # '[' -z 1732905 ']' 00:23:03.525 15:33:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.525 15:33:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:03.525 15:33:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.525 15:33:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:03.525 15:33:20 -- common/autotest_common.sh@10 -- # set +x 00:23:03.525 [2024-04-26 15:33:20.874834] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:23:03.525 [2024-04-26 15:33:20.874912] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.525 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.525 [2024-04-26 15:33:20.962349] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.787 [2024-04-26 15:33:21.053888] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.787 [2024-04-26 15:33:21.053945] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:03.787 [2024-04-26 15:33:21.053954] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:03.787 [2024-04-26 15:33:21.053960] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:03.787 [2024-04-26 15:33:21.053966] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:03.787 [2024-04-26 15:33:21.053991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.360 15:33:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:04.360 15:33:21 -- common/autotest_common.sh@850 -- # return 0 00:23:04.360 15:33:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:04.360 15:33:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:04.360 15:33:21 -- common/autotest_common.sh@10 -- # set +x 00:23:04.360 15:33:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:04.360 15:33:21 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:04.360 15:33:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.360 15:33:21 -- common/autotest_common.sh@10 -- # set +x 00:23:04.360 [2024-04-26 15:33:21.725391] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.360 15:33:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.360 15:33:21 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:04.360 15:33:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.360 15:33:21 -- common/autotest_common.sh@10 -- # set +x 00:23:04.360 [2024-04-26 15:33:21.737623] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:04.360 15:33:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.360 15:33:21 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:04.360 15:33:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.360 15:33:21 -- common/autotest_common.sh@10 -- # set +x 00:23:04.360 null0 00:23:04.360 15:33:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.360 15:33:21 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:04.360 15:33:21 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:23:04.360 15:33:21 -- common/autotest_common.sh@10 -- # set +x 00:23:04.360 null1 00:23:04.360 15:33:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.360 15:33:21 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:04.360 15:33:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.360 15:33:21 -- common/autotest_common.sh@10 -- # set +x 00:23:04.360 15:33:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.360 15:33:21 -- host/discovery.sh@45 -- # hostpid=1733076 00:23:04.360 15:33:21 -- host/discovery.sh@46 -- # waitforlisten 1733076 /tmp/host.sock 00:23:04.360 15:33:21 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:04.360 15:33:21 -- common/autotest_common.sh@817 -- # '[' -z 1733076 ']' 00:23:04.360 15:33:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:23:04.360 15:33:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:04.360 15:33:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:04.360 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:04.360 15:33:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:04.360 15:33:21 -- common/autotest_common.sh@10 -- # set +x 00:23:04.620 [2024-04-26 15:33:21.831853] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:23:04.620 [2024-04-26 15:33:21.831915] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1733076 ] 00:23:04.620 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.620 [2024-04-26 15:33:21.898464] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.620 [2024-04-26 15:33:21.972566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.192 15:33:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:05.192 15:33:22 -- common/autotest_common.sh@850 -- # return 0 00:23:05.192 15:33:22 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:05.192 15:33:22 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:05.192 15:33:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.192 15:33:22 -- common/autotest_common.sh@10 -- # set +x 00:23:05.192 15:33:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.192 15:33:22 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:05.192 15:33:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.192 15:33:22 -- common/autotest_common.sh@10 -- # set +x 00:23:05.192 15:33:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.192 15:33:22 -- host/discovery.sh@72 -- # notify_id=0 00:23:05.192 15:33:22 -- host/discovery.sh@83 -- # get_subsystem_names 00:23:05.192 15:33:22 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:05.192 15:33:22 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:05.192 15:33:22 -- host/discovery.sh@59 -- # sort 00:23:05.192 15:33:22 -- host/discovery.sh@59 -- # xargs 00:23:05.192 15:33:22 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.192 15:33:22 -- common/autotest_common.sh@10 -- # set +x 00:23:05.192 15:33:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.453 15:33:22 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:05.453 15:33:22 -- host/discovery.sh@84 -- # get_bdev_list 00:23:05.453 15:33:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:05.453 15:33:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:05.453 15:33:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.453 15:33:22 -- host/discovery.sh@55 -- # sort 00:23:05.453 15:33:22 -- common/autotest_common.sh@10 -- # set +x 00:23:05.453 15:33:22 -- host/discovery.sh@55 -- # xargs 00:23:05.453 15:33:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.453 15:33:22 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:05.453 15:33:22 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:05.453 15:33:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.453 15:33:22 -- common/autotest_common.sh@10 -- # set +x 00:23:05.453 15:33:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.453 15:33:22 -- host/discovery.sh@87 -- # get_subsystem_names 00:23:05.453 15:33:22 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:05.453 15:33:22 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:05.453 15:33:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.453 15:33:22 -- host/discovery.sh@59 -- # sort 00:23:05.454 15:33:22 -- common/autotest_common.sh@10 -- # set +x 00:23:05.454 15:33:22 -- host/discovery.sh@59 -- # xargs 00:23:05.454 15:33:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.454 15:33:22 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:05.454 15:33:22 -- host/discovery.sh@88 -- # get_bdev_list 00:23:05.454 15:33:22 -- host/discovery.sh@55 -- # xargs 00:23:05.454 15:33:22 -- host/discovery.sh@55 
-- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:05.454 15:33:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:05.454 15:33:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.454 15:33:22 -- host/discovery.sh@55 -- # sort 00:23:05.454 15:33:22 -- common/autotest_common.sh@10 -- # set +x 00:23:05.454 15:33:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.454 15:33:22 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:05.454 15:33:22 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:05.454 15:33:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.454 15:33:22 -- common/autotest_common.sh@10 -- # set +x 00:23:05.454 15:33:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.454 15:33:22 -- host/discovery.sh@91 -- # get_subsystem_names 00:23:05.454 15:33:22 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:05.454 15:33:22 -- host/discovery.sh@59 -- # xargs 00:23:05.454 15:33:22 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:05.454 15:33:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.454 15:33:22 -- host/discovery.sh@59 -- # sort 00:23:05.454 15:33:22 -- common/autotest_common.sh@10 -- # set +x 00:23:05.454 15:33:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.715 15:33:22 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:05.715 15:33:22 -- host/discovery.sh@92 -- # get_bdev_list 00:23:05.715 15:33:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:05.715 15:33:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:05.715 15:33:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.715 15:33:22 -- host/discovery.sh@55 -- # sort 00:23:05.715 15:33:22 -- common/autotest_common.sh@10 -- # set +x 00:23:05.715 15:33:22 -- host/discovery.sh@55 -- # xargs 00:23:05.715 15:33:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.715 15:33:22 -- 
host/discovery.sh@92 -- # [[ '' == '' ]]
00:23:05.715 15:33:22 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:23:05.715 15:33:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:05.715 15:33:22 -- common/autotest_common.sh@10 -- # set +x
00:23:05.715 [2024-04-26 15:33:22.968742] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:05.715 15:33:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:05.715 15:33:22 -- host/discovery.sh@97 -- # get_subsystem_names
00:23:05.715 15:33:22 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:05.715 15:33:22 -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:05.715 15:33:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:05.715 15:33:22 -- host/discovery.sh@59 -- # sort
00:23:05.715 15:33:22 -- common/autotest_common.sh@10 -- # set +x
00:23:05.715 15:33:22 -- host/discovery.sh@59 -- # xargs
00:23:05.715 15:33:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:05.715 15:33:23 -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:23:05.715 15:33:23 -- host/discovery.sh@98 -- # get_bdev_list
00:23:05.715 15:33:23 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:05.715 15:33:23 -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:05.715 15:33:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:05.715 15:33:23 -- common/autotest_common.sh@10 -- # set +x
00:23:05.715 15:33:23 -- host/discovery.sh@55 -- # sort
00:23:05.715 15:33:23 -- host/discovery.sh@55 -- # xargs
00:23:05.715 15:33:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:05.715 15:33:23 -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:23:05.715 15:33:23 -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:23:05.715 15:33:23 -- host/discovery.sh@79 -- # expected_count=0
00:23:05.715 15:33:23 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:05.715 15:33:23 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:05.715 15:33:23 -- common/autotest_common.sh@901 -- # local max=10
00:23:05.715 15:33:23 -- common/autotest_common.sh@902 -- # (( max-- ))
00:23:05.715 15:33:23 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:05.715 15:33:23 -- common/autotest_common.sh@903 -- # get_notification_count
00:23:05.715 15:33:23 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:23:05.715 15:33:23 -- host/discovery.sh@74 -- # jq '. | length'
00:23:05.715 15:33:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:05.715 15:33:23 -- common/autotest_common.sh@10 -- # set +x
00:23:05.715 15:33:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:05.715 15:33:23 -- host/discovery.sh@74 -- # notification_count=0
00:23:05.715 15:33:23 -- host/discovery.sh@75 -- # notify_id=0
00:23:05.715 15:33:23 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count ))
00:23:05.715 15:33:23 -- common/autotest_common.sh@904 -- # return 0
00:23:05.715 15:33:23 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:23:05.715 15:33:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:05.715 15:33:23 -- common/autotest_common.sh@10 -- # set +x
00:23:05.715 15:33:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:05.715 15:33:23 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:05.715 15:33:23 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:05.715 15:33:23 -- common/autotest_common.sh@901 -- # local max=10
00:23:05.715 15:33:23 -- common/autotest_common.sh@902 -- # (( max-- ))
00:23:05.715 15:33:23 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:23:05.715 15:33:23 -- common/autotest_common.sh@903 -- # get_subsystem_names
00:23:05.715 15:33:23 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:05.715 15:33:23 -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:05.715 15:33:23 -- host/discovery.sh@59 -- # sort
00:23:05.715 15:33:23 -- host/discovery.sh@59 -- # xargs
00:23:05.715 15:33:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:05.715 15:33:23 -- common/autotest_common.sh@10 -- # set +x
00:23:05.715 15:33:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:05.976 15:33:23 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]]
00:23:05.976 15:33:23 -- common/autotest_common.sh@906 -- # sleep 1
00:23:06.238 [2024-04-26 15:33:23.665824] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:23:06.238 [2024-04-26 15:33:23.665849] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:23:06.238 [2024-04-26 15:33:23.665863] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:06.498 [2024-04-26 15:33:23.755149] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:23:06.758 [2024-04-26 15:33:23.980106] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:23:06.758 [2024-04-26 15:33:23.980130] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:23:06.758 15:33:24 -- common/autotest_common.sh@902 -- # (( max-- ))
00:23:06.758 15:33:24 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:23:06.758 15:33:24 -- common/autotest_common.sh@903 -- # get_subsystem_names
00:23:06.758 15:33:24 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:06.758 15:33:24 -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:06.758 15:33:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:06.758 15:33:24 -- host/discovery.sh@59 -- # sort
00:23:06.758 15:33:24 -- common/autotest_common.sh@10 -- # set +x
00:23:06.758 15:33:24 -- host/discovery.sh@59 -- # xargs
00:23:07.019 15:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:07.019 15:33:24 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:07.019 15:33:24 -- common/autotest_common.sh@904 -- # return 0
00:23:07.019 15:33:24 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:23:07.019 15:33:24 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:23:07.019 15:33:24 -- common/autotest_common.sh@901 -- # local max=10
00:23:07.019 15:33:24 -- common/autotest_common.sh@902 -- # (( max-- ))
00:23:07.019 15:33:24 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:23:07.019 15:33:24 -- common/autotest_common.sh@903 -- # get_bdev_list
00:23:07.019 15:33:24 -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:07.019 15:33:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:07.019 15:33:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:07.019 15:33:24 -- host/discovery.sh@55 -- # sort
00:23:07.019 15:33:24 -- common/autotest_common.sh@10 -- # set +x
00:23:07.019 15:33:24 -- host/discovery.sh@55 -- # xargs
00:23:07.019 15:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:07.019 15:33:24 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:23:07.019 15:33:24 -- common/autotest_common.sh@904 -- # return 0
00:23:07.019 15:33:24 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:23:07.019 15:33:24 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:23:07.019 15:33:24 -- common/autotest_common.sh@901 -- # local max=10
00:23:07.019 15:33:24 -- common/autotest_common.sh@902 -- # (( max-- ))
00:23:07.019 15:33:24 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:23:07.019 15:33:24 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0
00:23:07.019 15:33:24 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:23:07.019 15:33:24 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:07.019 15:33:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:07.019 15:33:24 -- host/discovery.sh@63 -- # sort -n
00:23:07.019 15:33:24 -- common/autotest_common.sh@10 -- # set +x
00:23:07.019 15:33:24 -- host/discovery.sh@63 -- # xargs
00:23:07.019 15:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:07.019 15:33:24 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]]
00:23:07.019 15:33:24 -- common/autotest_common.sh@904 -- # return 0
00:23:07.019 15:33:24 -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:23:07.019 15:33:24 -- host/discovery.sh@79 -- # expected_count=1
00:23:07.019 15:33:24 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:07.019 15:33:24 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:07.019 15:33:24 -- common/autotest_common.sh@901 -- # local max=10
00:23:07.019 15:33:24 -- common/autotest_common.sh@902 -- # (( max-- ))
00:23:07.019 15:33:24 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:07.019 15:33:24 -- common/autotest_common.sh@903 -- # get_notification_count
00:23:07.019 15:33:24 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:23:07.019 15:33:24 -- host/discovery.sh@74 -- # jq '. | length'
00:23:07.019 15:33:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:07.019 15:33:24 -- common/autotest_common.sh@10 -- # set +x
00:23:07.019 15:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:07.019 15:33:24 -- host/discovery.sh@74 -- # notification_count=1
00:23:07.019 15:33:24 -- host/discovery.sh@75 -- # notify_id=1
00:23:07.019 15:33:24 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count ))
00:23:07.019 15:33:24 -- common/autotest_common.sh@904 -- # return 0
00:23:07.019 15:33:24 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:23:07.019 15:33:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:07.019 15:33:24 -- common/autotest_common.sh@10 -- # set +x
00:23:07.019 15:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:07.019 15:33:24 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:07.019 15:33:24 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:07.019 15:33:24 -- common/autotest_common.sh@901 -- # local max=10
00:23:07.019 15:33:24 -- common/autotest_common.sh@902 -- # (( max-- ))
00:23:07.019 15:33:24 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:23:07.019 15:33:24 -- common/autotest_common.sh@903 -- # get_bdev_list
00:23:07.019 15:33:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:07.019 15:33:24 -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:07.019 15:33:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:07.019 15:33:24 -- common/autotest_common.sh@10 -- # set +x
00:23:07.019 15:33:24 -- host/discovery.sh@55 -- # sort
00:23:07.019 15:33:24 -- host/discovery.sh@55 -- # xargs
00:23:07.019 15:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:07.019 15:33:24 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:23:07.019 15:33:24 -- common/autotest_common.sh@904 -- # return 0
00:23:07.019 15:33:24 -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:23:07.019 15:33:24 -- host/discovery.sh@79 -- # expected_count=1
00:23:07.019 15:33:24 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:07.019 15:33:24 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:07.019 15:33:24 -- common/autotest_common.sh@901 -- # local max=10
00:23:07.019 15:33:24 -- common/autotest_common.sh@902 -- # (( max-- ))
00:23:07.019 15:33:24 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:07.019 15:33:24 -- common/autotest_common.sh@903 -- # get_notification_count
00:23:07.019 15:33:24 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:23:07.019 15:33:24 -- host/discovery.sh@74 -- # jq '. | length'
00:23:07.019 15:33:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:07.019 15:33:24 -- common/autotest_common.sh@10 -- # set +x
00:23:07.280 15:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:07.280 15:33:24 -- host/discovery.sh@74 -- # notification_count=1
00:23:07.280 15:33:24 -- host/discovery.sh@75 -- # notify_id=2
00:23:07.280 15:33:24 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count ))
00:23:07.280 15:33:24 -- common/autotest_common.sh@904 -- # return 0
00:23:07.280 15:33:24 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:23:07.280 15:33:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:07.280 15:33:24 -- common/autotest_common.sh@10 -- # set +x
00:23:07.280 [2024-04-26 15:33:24.504983] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
[2024-04-26 15:33:24.506055] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
[2024-04-26 15:33:24.506080] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:07.280 15:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:07.280 15:33:24 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:07.280 15:33:24 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:07.280 15:33:24 -- common/autotest_common.sh@901 -- # local max=10
00:23:07.280 15:33:24 -- common/autotest_common.sh@902 -- # (( max-- ))
00:23:07.280 15:33:24 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:23:07.280 15:33:24 -- common/autotest_common.sh@903 -- # get_subsystem_names
00:23:07.280 15:33:24 -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:07.280 15:33:24 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:07.280 15:33:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:07.280 15:33:24 -- common/autotest_common.sh@10 -- # set +x
00:23:07.280 15:33:24 -- host/discovery.sh@59 -- # sort
00:23:07.280 15:33:24 -- host/discovery.sh@59 -- # xargs
00:23:07.280 15:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:07.280 15:33:24 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:07.280 15:33:24 -- common/autotest_common.sh@904 -- # return 0
00:23:07.280 15:33:24 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:07.280 15:33:24 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:07.280 15:33:24 -- common/autotest_common.sh@901 -- # local max=10
00:23:07.280 15:33:24 -- common/autotest_common.sh@902 -- # (( max-- ))
00:23:07.280 15:33:24 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:23:07.280 15:33:24 -- common/autotest_common.sh@903 -- # get_bdev_list
00:23:07.280 15:33:24 -- host/discovery.sh@55 -- # xargs
00:23:07.280 15:33:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:07.280 15:33:24 -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:07.280 15:33:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:07.280 15:33:24 -- host/discovery.sh@55 -- # sort
00:23:07.280 15:33:24 -- common/autotest_common.sh@10 -- # set +x
00:23:07.280 15:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
[2024-04-26 15:33:24.594350] bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:23:07.280 15:33:24 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:23:07.280 15:33:24 -- common/autotest_common.sh@904 -- # return 0
00:23:07.280 15:33:24 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:23:07.280 15:33:24 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:23:07.280 15:33:24 -- common/autotest_common.sh@901 -- # local max=10
00:23:07.280 15:33:24 -- common/autotest_common.sh@902 -- # (( max-- ))
00:23:07.280 15:33:24 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:23:07.280 15:33:24 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0
00:23:07.280 15:33:24 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:23:07.280 15:33:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:07.280 15:33:24 -- common/autotest_common.sh@10 -- # set +x
00:23:07.280 15:33:24 -- host/discovery.sh@63 -- # xargs
00:23:07.280 15:33:24 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:07.280 15:33:24 -- host/discovery.sh@63 -- # sort -n
00:23:07.280 15:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:07.280 15:33:24 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:23:07.280 15:33:24 -- common/autotest_common.sh@906 -- # sleep 1
00:23:07.539 [2024-04-26 15:33:24.905790] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:23:07.540 [2024-04-26 15:33:24.905810] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:23:07.540 [2024-04-26 15:33:24.905815] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:23:08.482 15:33:25 -- common/autotest_common.sh@902 -- # (( max-- ))
00:23:08.482 15:33:25 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:23:08.482 15:33:25 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0
00:23:08.482 15:33:25 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:23:08.482 15:33:25 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:08.482 15:33:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:08.482 15:33:25 -- host/discovery.sh@63 -- # sort -n
00:23:08.482 15:33:25 -- common/autotest_common.sh@10 -- # set +x
00:23:08.482 15:33:25 -- host/discovery.sh@63 -- # xargs
00:23:08.482 15:33:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:08.482 15:33:25 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:23:08.482 15:33:25 -- common/autotest_common.sh@904 -- # return 0
00:23:08.482 15:33:25 -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:23:08.482 15:33:25 -- host/discovery.sh@79 -- # expected_count=0
00:23:08.482 15:33:25 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:08.482 15:33:25 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:08.482 15:33:25 -- common/autotest_common.sh@901 -- # local max=10
00:23:08.482 15:33:25 -- common/autotest_common.sh@902 -- # (( max-- ))
00:23:08.482 15:33:25 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:08.482 15:33:25 -- common/autotest_common.sh@903 -- # get_notification_count
00:23:08.482 15:33:25 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:23:08.482 15:33:25 -- host/discovery.sh@74 -- # jq '. | length'
00:23:08.482 15:33:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:08.482 15:33:25 -- common/autotest_common.sh@10 -- # set +x
00:23:08.482 15:33:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:08.482 15:33:25 -- host/discovery.sh@74 -- # notification_count=0
00:23:08.482 15:33:25 -- host/discovery.sh@75 -- # notify_id=2
00:23:08.482 15:33:25 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count ))
00:23:08.482 15:33:25 -- common/autotest_common.sh@904 -- # return 0
00:23:08.482 15:33:25 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:23:08.482 15:33:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:08.482 15:33:25 -- common/autotest_common.sh@10 -- # set +x
00:23:08.482 [2024-04-26 15:33:25.773257] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
[2024-04-26 15:33:25.773278] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:08.482 15:33:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:08.482 15:33:25 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:08.482 15:33:25 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:08.482 15:33:25 -- common/autotest_common.sh@901 -- # local max=10
00:23:08.482 15:33:25 -- common/autotest_common.sh@902 -- # (( max-- ))
00:23:08.482 15:33:25 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:23:08.482 15:33:25 -- common/autotest_common.sh@903 -- # get_subsystem_names
[2024-04-26 15:33:25.781787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-04-26 15:33:25.781806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-04-26 15:33:25.781815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-04-26 15:33:25.781822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-04-26 15:33:25.781830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-04-26 15:33:25.781840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-04-26 15:33:25.781848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-04-26 15:33:25.781855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-04-26 15:33:25.781869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3d0f0 is same with the state(5) to be set
00:23:08.483 15:33:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:08.483 15:33:25 -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:08.483 15:33:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:08.483 15:33:25 -- host/discovery.sh@59 -- # sort
00:23:08.483 15:33:25 -- common/autotest_common.sh@10 -- # set +x
00:23:08.483 15:33:25 -- host/discovery.sh@59 -- # xargs
[2024-04-26 15:33:25.791801] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3d0f0 (9): Bad file descriptor
00:23:08.483 15:33:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
[2024-04-26 15:33:25.801842] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-04-26 15:33:25.802200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-26 15:33:25.802555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-26 15:33:25.802565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3d0f0 with addr=10.0.0.2, port=4420
[2024-04-26 15:33:25.802573] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3d0f0 is same with the state(5) to be set
[2024-04-26 15:33:25.802585] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3d0f0 (9): Bad file descriptor
[2024-04-26 15:33:25.802603] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-04-26 15:33:25.802611] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-04-26 15:33:25.802619] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
[2024-04-26 15:33:25.802631] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:08.483 [2024-04-26 15:33:25.811898] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-04-26 15:33:25.812101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-26 15:33:25.812402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-26 15:33:25.812411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3d0f0 with addr=10.0.0.2, port=4420
[2024-04-26 15:33:25.812418] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3d0f0 is same with the state(5) to be set
[2024-04-26 15:33:25.812429] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3d0f0 (9): Bad file descriptor
[2024-04-26 15:33:25.812439] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-04-26 15:33:25.812445] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-04-26 15:33:25.812452] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
[2024-04-26 15:33:25.812462] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:08.483 [2024-04-26 15:33:25.821947] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-04-26 15:33:25.822260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-26 15:33:25.822593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-26 15:33:25.822602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3d0f0 with addr=10.0.0.2, port=4420
[2024-04-26 15:33:25.822610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3d0f0 is same with the state(5) to be set
[2024-04-26 15:33:25.822621] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3d0f0 (9): Bad file descriptor
[2024-04-26 15:33:25.822635] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-04-26 15:33:25.822641] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-04-26 15:33:25.822648] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
[2024-04-26 15:33:25.822658] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:08.483 [2024-04-26 15:33:25.832000] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-04-26 15:33:25.832312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-26 15:33:25.832661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-26 15:33:25.832670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3d0f0 with addr=10.0.0.2, port=4420
[2024-04-26 15:33:25.832677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3d0f0 is same with the state(5) to be set
[2024-04-26 15:33:25.832688] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3d0f0 (9): Bad file descriptor
[2024-04-26 15:33:25.832698] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-04-26 15:33:25.832704] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-04-26 15:33:25.832711] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
[2024-04-26 15:33:25.832727] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:08.483 15:33:25 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:08.483 15:33:25 -- common/autotest_common.sh@904 -- # return 0
00:23:08.483 15:33:25 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:08.483 15:33:25 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:08.483 15:33:25 -- common/autotest_common.sh@901 -- # local max=10
00:23:08.483 15:33:25 -- common/autotest_common.sh@902 -- # (( max-- ))
00:23:08.483 15:33:25 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:23:08.483 15:33:25 -- common/autotest_common.sh@903 -- # get_bdev_list
00:23:08.483 15:33:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:08.483 15:33:25 -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:08.483 15:33:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:08.483 15:33:25 -- host/discovery.sh@55 -- # sort
00:23:08.483 15:33:25 -- common/autotest_common.sh@10 -- # set +x
00:23:08.483 15:33:25 -- host/discovery.sh@55 -- # xargs
00:23:08.483 [2024-04-26 15:33:25.842051] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-04-26 15:33:25.842361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-26 15:33:25.842551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-26 15:33:25.842564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3d0f0 with addr=10.0.0.2, port=4420
[2024-04-26 15:33:25.842571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3d0f0 is same with the state(5) to be set
[2024-04-26 15:33:25.842582] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3d0f0 (9): Bad file descriptor
[2024-04-26 15:33:25.842593] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-04-26 15:33:25.842600] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-04-26 15:33:25.842607] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
[2024-04-26 15:33:25.842621] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-04-26 15:33:25.852102] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-04-26 15:33:25.852419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-26 15:33:25.852738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-26 15:33:25.852747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3d0f0 with addr=10.0.0.2, port=4420
[2024-04-26 15:33:25.852754] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3d0f0 is same with the state(5) to be set
[2024-04-26 15:33:25.852765] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3d0f0 (9): Bad file descriptor
[2024-04-26 15:33:25.852775] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-04-26 15:33:25.852782] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-04-26 15:33:25.852788] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:23:08.484 [2024-04-26 15:33:25.852799] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:08.484 [2024-04-26 15:33:25.861977] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:08.484 [2024-04-26 15:33:25.861996] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:08.484 15:33:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.484 15:33:25 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:08.484 15:33:25 -- common/autotest_common.sh@904 -- # return 0 00:23:08.484 15:33:25 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:08.484 15:33:25 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:08.484 15:33:25 -- common/autotest_common.sh@901 -- # local max=10 00:23:08.484 15:33:25 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:08.484 15:33:25 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:08.484 15:33:25 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:08.484 15:33:25 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:08.484 15:33:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.484 15:33:25 -- common/autotest_common.sh@10 -- # set +x 00:23:08.484 15:33:25 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:08.484 15:33:25 -- host/discovery.sh@63 -- # sort -n 00:23:08.484 15:33:25 -- host/discovery.sh@63 -- # xargs 00:23:08.484 15:33:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.744 15:33:25 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:23:08.744 
15:33:25 -- common/autotest_common.sh@904 -- # return 0 00:23:08.744 15:33:25 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:08.744 15:33:25 -- host/discovery.sh@79 -- # expected_count=0 00:23:08.744 15:33:25 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:08.744 15:33:25 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:08.744 15:33:25 -- common/autotest_common.sh@901 -- # local max=10 00:23:08.744 15:33:25 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:08.744 15:33:25 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:08.744 15:33:25 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:08.744 15:33:25 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:08.744 15:33:25 -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:08.744 15:33:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.744 15:33:25 -- common/autotest_common.sh@10 -- # set +x 00:23:08.744 15:33:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.744 15:33:25 -- host/discovery.sh@74 -- # notification_count=0 00:23:08.744 15:33:25 -- host/discovery.sh@75 -- # notify_id=2 00:23:08.744 15:33:25 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:08.744 15:33:25 -- common/autotest_common.sh@904 -- # return 0 00:23:08.744 15:33:25 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:08.744 15:33:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.744 15:33:25 -- common/autotest_common.sh@10 -- # set +x 00:23:08.744 15:33:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.744 15:33:26 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:08.744 15:33:26 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:08.744 15:33:26 -- common/autotest_common.sh@901 -- # local max=10 00:23:08.744 15:33:26 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:08.744 15:33:26 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:08.744 15:33:26 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:08.744 15:33:26 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:08.744 15:33:26 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:08.744 15:33:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.745 15:33:26 -- host/discovery.sh@59 -- # sort 00:23:08.745 15:33:26 -- common/autotest_common.sh@10 -- # set +x 00:23:08.745 15:33:26 -- host/discovery.sh@59 -- # xargs 00:23:08.745 15:33:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.745 15:33:26 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:23:08.745 
15:33:26 -- common/autotest_common.sh@904 -- # return 0 00:23:08.745 15:33:26 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:08.745 15:33:26 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:08.745 15:33:26 -- common/autotest_common.sh@901 -- # local max=10 00:23:08.745 15:33:26 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:08.745 15:33:26 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:08.745 15:33:26 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:08.745 15:33:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:08.745 15:33:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:08.745 15:33:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.745 15:33:26 -- host/discovery.sh@55 -- # sort 00:23:08.745 15:33:26 -- common/autotest_common.sh@10 -- # set +x 00:23:08.745 15:33:26 -- host/discovery.sh@55 -- # xargs 00:23:08.745 15:33:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.745 15:33:26 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:23:08.745 15:33:26 -- common/autotest_common.sh@904 -- # return 0 00:23:08.745 15:33:26 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:08.745 15:33:26 -- host/discovery.sh@79 -- # expected_count=2 00:23:08.745 15:33:26 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:08.745 15:33:26 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:08.745 15:33:26 -- common/autotest_common.sh@901 -- # local max=10 00:23:08.745 15:33:26 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:08.745 15:33:26 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:08.745 15:33:26 -- common/autotest_common.sh@903 -- # 
get_notification_count 00:23:08.745 15:33:26 -- host/discovery.sh@74 -- # jq '. | length' 00:23:08.745 15:33:26 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:08.745 15:33:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.745 15:33:26 -- common/autotest_common.sh@10 -- # set +x 00:23:08.745 15:33:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.745 15:33:26 -- host/discovery.sh@74 -- # notification_count=2 00:23:08.745 15:33:26 -- host/discovery.sh@75 -- # notify_id=4 00:23:08.745 15:33:26 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:08.745 15:33:26 -- common/autotest_common.sh@904 -- # return 0 00:23:08.745 15:33:26 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:08.745 15:33:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.745 15:33:26 -- common/autotest_common.sh@10 -- # set +x 00:23:10.129 [2024-04-26 15:33:27.194773] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:10.129 [2024-04-26 15:33:27.194791] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:10.129 [2024-04-26 15:33:27.194804] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:10.129 [2024-04-26 15:33:27.282090] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:10.129 [2024-04-26 15:33:27.346823] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:10.129 [2024-04-26 15:33:27.346860] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:10.129 15:33:27 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.129 15:33:27 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:10.129 15:33:27 -- common/autotest_common.sh@638 -- # local es=0 00:23:10.129 15:33:27 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:10.129 15:33:27 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:10.129 15:33:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:10.129 15:33:27 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:10.129 15:33:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:10.129 15:33:27 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:10.129 15:33:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.129 15:33:27 -- common/autotest_common.sh@10 -- # set +x 00:23:10.129 request: 00:23:10.129 { 00:23:10.129 "name": "nvme", 00:23:10.129 "trtype": "tcp", 00:23:10.129 "traddr": "10.0.0.2", 00:23:10.129 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:10.129 "adrfam": "ipv4", 00:23:10.129 "trsvcid": "8009", 00:23:10.129 "wait_for_attach": true, 00:23:10.129 "method": "bdev_nvme_start_discovery", 00:23:10.129 "req_id": 1 00:23:10.129 } 00:23:10.129 Got JSON-RPC error response 00:23:10.129 response: 00:23:10.129 { 00:23:10.129 "code": -17, 00:23:10.129 "message": "File exists" 00:23:10.129 } 00:23:10.129 15:33:27 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:10.129 15:33:27 -- common/autotest_common.sh@641 -- # es=1 00:23:10.129 15:33:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:10.129 15:33:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:10.129 15:33:27 
-- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:10.129 15:33:27 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:10.129 15:33:27 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:10.129 15:33:27 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:10.129 15:33:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.129 15:33:27 -- host/discovery.sh@67 -- # sort 00:23:10.129 15:33:27 -- common/autotest_common.sh@10 -- # set +x 00:23:10.129 15:33:27 -- host/discovery.sh@67 -- # xargs 00:23:10.130 15:33:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.130 15:33:27 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:10.130 15:33:27 -- host/discovery.sh@146 -- # get_bdev_list 00:23:10.130 15:33:27 -- host/discovery.sh@55 -- # xargs 00:23:10.130 15:33:27 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:10.130 15:33:27 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:10.130 15:33:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.130 15:33:27 -- host/discovery.sh@55 -- # sort 00:23:10.130 15:33:27 -- common/autotest_common.sh@10 -- # set +x 00:23:10.130 15:33:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.130 15:33:27 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:10.130 15:33:27 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:10.130 15:33:27 -- common/autotest_common.sh@638 -- # local es=0 00:23:10.130 15:33:27 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:10.130 15:33:27 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:10.130 15:33:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 
00:23:10.130 15:33:27 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:10.130 15:33:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:10.130 15:33:27 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:10.130 15:33:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.130 15:33:27 -- common/autotest_common.sh@10 -- # set +x 00:23:10.130 request: 00:23:10.130 { 00:23:10.130 "name": "nvme_second", 00:23:10.130 "trtype": "tcp", 00:23:10.130 "traddr": "10.0.0.2", 00:23:10.130 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:10.130 "adrfam": "ipv4", 00:23:10.130 "trsvcid": "8009", 00:23:10.130 "wait_for_attach": true, 00:23:10.130 "method": "bdev_nvme_start_discovery", 00:23:10.130 "req_id": 1 00:23:10.130 } 00:23:10.130 Got JSON-RPC error response 00:23:10.130 response: 00:23:10.130 { 00:23:10.130 "code": -17, 00:23:10.130 "message": "File exists" 00:23:10.130 } 00:23:10.130 15:33:27 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:10.130 15:33:27 -- common/autotest_common.sh@641 -- # es=1 00:23:10.130 15:33:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:10.130 15:33:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:10.130 15:33:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:10.130 15:33:27 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:10.130 15:33:27 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:10.130 15:33:27 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:10.130 15:33:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.130 15:33:27 -- common/autotest_common.sh@10 -- # set +x 00:23:10.130 15:33:27 -- host/discovery.sh@67 -- # sort 00:23:10.130 15:33:27 -- host/discovery.sh@67 -- # xargs 00:23:10.130 15:33:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.130 15:33:27 -- 
host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:10.130 15:33:27 -- host/discovery.sh@152 -- # get_bdev_list 00:23:10.130 15:33:27 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:10.130 15:33:27 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:10.130 15:33:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.130 15:33:27 -- host/discovery.sh@55 -- # sort 00:23:10.130 15:33:27 -- common/autotest_common.sh@10 -- # set +x 00:23:10.130 15:33:27 -- host/discovery.sh@55 -- # xargs 00:23:10.391 15:33:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.391 15:33:27 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:10.391 15:33:27 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:10.391 15:33:27 -- common/autotest_common.sh@638 -- # local es=0 00:23:10.391 15:33:27 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:10.391 15:33:27 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:10.391 15:33:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:10.391 15:33:27 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:10.391 15:33:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:10.391 15:33:27 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:10.391 15:33:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.391 15:33:27 -- common/autotest_common.sh@10 -- # set +x 00:23:11.331 [2024-04-26 15:33:28.610216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.331 [2024-04-26 
15:33:28.610514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.331 [2024-04-26 15:33:28.610525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe59640 with addr=10.0.0.2, port=8010 00:23:11.331 [2024-04-26 15:33:28.610536] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:11.331 [2024-04-26 15:33:28.610543] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:11.331 [2024-04-26 15:33:28.610551] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:12.271 [2024-04-26 15:33:29.612696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.271 [2024-04-26 15:33:29.612935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.271 [2024-04-26 15:33:29.612948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe59640 with addr=10.0.0.2, port=8010 00:23:12.271 [2024-04-26 15:33:29.612959] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:12.271 [2024-04-26 15:33:29.612966] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:12.271 [2024-04-26 15:33:29.612972] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:13.215 [2024-04-26 15:33:30.614677] bdev_nvme.c:6966:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:13.215 request: 00:23:13.215 { 00:23:13.215 "name": "nvme_second", 00:23:13.215 "trtype": "tcp", 00:23:13.215 "traddr": "10.0.0.2", 00:23:13.215 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:13.215 "adrfam": "ipv4", 00:23:13.215 "trsvcid": "8010", 00:23:13.215 "attach_timeout_ms": 3000, 00:23:13.215 "method": "bdev_nvme_start_discovery", 00:23:13.215 "req_id": 1 00:23:13.215 } 00:23:13.215 Got JSON-RPC error response 00:23:13.215 response: 
00:23:13.215 { 00:23:13.215 "code": -110, 00:23:13.215 "message": "Connection timed out" 00:23:13.215 } 00:23:13.215 15:33:30 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:13.215 15:33:30 -- common/autotest_common.sh@641 -- # es=1 00:23:13.215 15:33:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:13.215 15:33:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:13.215 15:33:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:13.215 15:33:30 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:13.215 15:33:30 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:13.215 15:33:30 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:13.215 15:33:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.215 15:33:30 -- common/autotest_common.sh@10 -- # set +x 00:23:13.215 15:33:30 -- host/discovery.sh@67 -- # sort 00:23:13.215 15:33:30 -- host/discovery.sh@67 -- # xargs 00:23:13.215 15:33:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.477 15:33:30 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:13.477 15:33:30 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:13.477 15:33:30 -- host/discovery.sh@161 -- # kill 1733076 00:23:13.477 15:33:30 -- host/discovery.sh@162 -- # nvmftestfini 00:23:13.477 15:33:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:13.477 15:33:30 -- nvmf/common.sh@117 -- # sync 00:23:13.477 15:33:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:13.477 15:33:30 -- nvmf/common.sh@120 -- # set +e 00:23:13.477 15:33:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:13.477 15:33:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:13.477 rmmod nvme_tcp 00:23:13.477 rmmod nvme_fabrics 00:23:13.477 rmmod nvme_keyring 00:23:13.477 15:33:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:13.477 15:33:30 -- nvmf/common.sh@124 -- # set -e 00:23:13.477 15:33:30 -- nvmf/common.sh@125 -- # return 0 00:23:13.477 
15:33:30 -- nvmf/common.sh@478 -- # '[' -n 1732905 ']' 00:23:13.477 15:33:30 -- nvmf/common.sh@479 -- # killprocess 1732905 00:23:13.477 15:33:30 -- common/autotest_common.sh@936 -- # '[' -z 1732905 ']' 00:23:13.477 15:33:30 -- common/autotest_common.sh@940 -- # kill -0 1732905 00:23:13.477 15:33:30 -- common/autotest_common.sh@941 -- # uname 00:23:13.477 15:33:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:13.477 15:33:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1732905 00:23:13.477 15:33:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:13.477 15:33:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:13.477 15:33:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1732905' 00:23:13.477 killing process with pid 1732905 00:23:13.477 15:33:30 -- common/autotest_common.sh@955 -- # kill 1732905 00:23:13.477 15:33:30 -- common/autotest_common.sh@960 -- # wait 1732905 00:23:13.477 15:33:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:13.477 15:33:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:13.477 15:33:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:13.477 15:33:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:13.477 15:33:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:13.477 15:33:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.477 15:33:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.477 15:33:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.033 15:33:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:16.033 00:23:16.033 real 0m19.482s 00:23:16.033 user 0m22.791s 00:23:16.033 sys 0m6.650s 00:23:16.033 15:33:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:16.033 15:33:32 -- common/autotest_common.sh@10 -- # set +x 00:23:16.033 ************************************ 00:23:16.033 END TEST nvmf_discovery 
00:23:16.033 ************************************ 00:23:16.033 15:33:33 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:16.033 15:33:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:16.033 15:33:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:16.033 15:33:33 -- common/autotest_common.sh@10 -- # set +x 00:23:16.033 ************************************ 00:23:16.033 START TEST nvmf_discovery_remove_ifc 00:23:16.033 ************************************ 00:23:16.033 15:33:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:16.033 * Looking for test storage... 00:23:16.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:16.033 15:33:33 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:16.033 15:33:33 -- nvmf/common.sh@7 -- # uname -s 00:23:16.033 15:33:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.033 15:33:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.033 15:33:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.033 15:33:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.033 15:33:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.033 15:33:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.033 15:33:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.033 15:33:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.033 15:33:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.033 15:33:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.033 15:33:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:16.033 15:33:33 -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:16.033 15:33:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.033 15:33:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.033 15:33:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:16.033 15:33:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.033 15:33:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:16.033 15:33:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.033 15:33:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.033 15:33:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.033 15:33:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.033 15:33:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.034 15:33:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.034 15:33:33 -- paths/export.sh@5 -- # export PATH 00:23:16.034 15:33:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.034 15:33:33 -- nvmf/common.sh@47 -- # : 0 00:23:16.034 15:33:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:16.034 15:33:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:16.034 15:33:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.034 15:33:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.034 15:33:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.034 15:33:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:16.034 15:33:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:16.034 15:33:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:16.034 15:33:33 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:16.034 15:33:33 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:16.034 15:33:33 -- 
host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:16.034 15:33:33 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:16.034 15:33:33 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:16.034 15:33:33 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:16.034 15:33:33 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:16.034 15:33:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:16.034 15:33:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.034 15:33:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:16.034 15:33:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:16.034 15:33:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:16.034 15:33:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.034 15:33:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.034 15:33:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.034 15:33:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:16.034 15:33:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:16.034 15:33:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:16.034 15:33:33 -- common/autotest_common.sh@10 -- # set +x 00:23:24.176 15:33:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:24.176 15:33:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:24.176 15:33:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:24.176 15:33:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:24.176 15:33:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:24.176 15:33:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:24.176 15:33:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:24.176 15:33:40 -- nvmf/common.sh@295 -- # net_devs=() 00:23:24.176 15:33:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:24.176 15:33:40 -- nvmf/common.sh@296 -- # e810=() 
00:23:24.176 15:33:40 -- nvmf/common.sh@296 -- # local -ga e810 00:23:24.176 15:33:40 -- nvmf/common.sh@297 -- # x722=() 00:23:24.176 15:33:40 -- nvmf/common.sh@297 -- # local -ga x722 00:23:24.176 15:33:40 -- nvmf/common.sh@298 -- # mlx=() 00:23:24.176 15:33:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:24.176 15:33:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.176 15:33:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.176 15:33:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.176 15:33:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.176 15:33:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.176 15:33:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.176 15:33:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.176 15:33:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.176 15:33:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.176 15:33:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.176 15:33:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.176 15:33:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:24.176 15:33:40 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:24.176 15:33:40 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:24.176 15:33:40 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:24.176 15:33:40 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:24.176 15:33:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:24.176 15:33:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:24.176 15:33:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:24.176 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:24.176 15:33:40 -- nvmf/common.sh@342 -- 
# [[ ice == unknown ]] 00:23:24.176 15:33:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:24.176 15:33:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.176 15:33:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.176 15:33:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:24.176 15:33:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:24.176 15:33:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:24.176 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:24.176 15:33:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:24.176 15:33:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:24.176 15:33:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.176 15:33:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.176 15:33:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:24.176 15:33:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:24.176 15:33:40 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:24.176 15:33:40 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:24.176 15:33:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:24.176 15:33:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.176 15:33:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:24.176 15:33:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.176 15:33:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:24.176 Found net devices under 0000:31:00.0: cvl_0_0 00:23:24.176 15:33:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.176 15:33:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:24.176 15:33:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.176 15:33:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:24.176 15:33:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:23:24.176 15:33:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:24.176 Found net devices under 0000:31:00.1: cvl_0_1 00:23:24.176 15:33:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.176 15:33:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:24.176 15:33:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:24.176 15:33:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:24.176 15:33:40 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:24.176 15:33:40 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:24.176 15:33:40 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:24.176 15:33:40 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:24.176 15:33:40 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:24.176 15:33:40 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:24.176 15:33:40 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:24.176 15:33:40 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:24.176 15:33:40 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:24.176 15:33:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:24.176 15:33:40 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:24.176 15:33:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:24.176 15:33:40 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:24.176 15:33:40 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:24.176 15:33:40 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:24.176 15:33:40 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:24.176 15:33:40 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:24.176 15:33:40 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:24.176 15:33:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:24.176 
15:33:40 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:24.176 15:33:40 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:24.176 15:33:40 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:24.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.503 ms 00:23:24.176 00:23:24.176 --- 10.0.0.2 ping statistics --- 00:23:24.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.176 rtt min/avg/max/mdev = 0.503/0.503/0.503/0.000 ms 00:23:24.176 15:33:40 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:24.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:24.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:23:24.176 00:23:24.176 --- 10.0.0.1 ping statistics --- 00:23:24.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.176 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:23:24.176 15:33:40 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.176 15:33:40 -- nvmf/common.sh@411 -- # return 0 00:23:24.176 15:33:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:24.177 15:33:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.177 15:33:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:24.177 15:33:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:24.177 15:33:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.177 15:33:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:24.177 15:33:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:24.177 15:33:40 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:24.177 15:33:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:24.177 15:33:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:24.177 15:33:40 -- common/autotest_common.sh@10 -- # set +x 00:23:24.177 15:33:40 -- 
nvmf/common.sh@470 -- # nvmfpid=1739180 00:23:24.177 15:33:40 -- nvmf/common.sh@471 -- # waitforlisten 1739180 00:23:24.177 15:33:40 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:24.177 15:33:40 -- common/autotest_common.sh@817 -- # '[' -z 1739180 ']' 00:23:24.177 15:33:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.177 15:33:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:24.177 15:33:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.177 15:33:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:24.177 15:33:40 -- common/autotest_common.sh@10 -- # set +x 00:23:24.177 [2024-04-26 15:33:40.673783] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:23:24.177 [2024-04-26 15:33:40.673853] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.177 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.177 [2024-04-26 15:33:40.761918] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.177 [2024-04-26 15:33:40.853825] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.177 [2024-04-26 15:33:40.853893] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:24.177 [2024-04-26 15:33:40.853903] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.177 [2024-04-26 15:33:40.853910] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.177 [2024-04-26 15:33:40.853916] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.177 [2024-04-26 15:33:40.853947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.177 15:33:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:24.177 15:33:41 -- common/autotest_common.sh@850 -- # return 0 00:23:24.177 15:33:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:24.177 15:33:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:24.177 15:33:41 -- common/autotest_common.sh@10 -- # set +x 00:23:24.177 15:33:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.177 15:33:41 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:24.177 15:33:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.177 15:33:41 -- common/autotest_common.sh@10 -- # set +x 00:23:24.177 [2024-04-26 15:33:41.514549] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.177 [2024-04-26 15:33:41.522752] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:24.177 null0 00:23:24.177 [2024-04-26 15:33:41.554728] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.177 15:33:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.177 15:33:41 -- host/discovery_remove_ifc.sh@59 -- # hostpid=1739527 00:23:24.177 15:33:41 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1739527 /tmp/host.sock 00:23:24.177 15:33:41 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r 
/tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:24.177 15:33:41 -- common/autotest_common.sh@817 -- # '[' -z 1739527 ']' 00:23:24.177 15:33:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:23:24.177 15:33:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:24.177 15:33:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:24.177 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:24.177 15:33:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:24.177 15:33:41 -- common/autotest_common.sh@10 -- # set +x 00:23:24.437 [2024-04-26 15:33:41.636536] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:23:24.438 [2024-04-26 15:33:41.636600] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1739527 ] 00:23:24.438 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.438 [2024-04-26 15:33:41.700973] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.438 [2024-04-26 15:33:41.772916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.009 15:33:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:25.009 15:33:42 -- common/autotest_common.sh@850 -- # return 0 00:23:25.009 15:33:42 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:25.009 15:33:42 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:25.009 15:33:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.009 15:33:42 -- common/autotest_common.sh@10 -- # set +x 00:23:25.009 15:33:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.009 
15:33:42 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:25.009 15:33:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.009 15:33:42 -- common/autotest_common.sh@10 -- # set +x 00:23:25.270 15:33:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.270 15:33:42 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:25.270 15:33:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.270 15:33:42 -- common/autotest_common.sh@10 -- # set +x 00:23:26.210 [2024-04-26 15:33:43.524002] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:26.210 [2024-04-26 15:33:43.524023] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:26.210 [2024-04-26 15:33:43.524036] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:26.210 [2024-04-26 15:33:43.610308] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:26.470 [2024-04-26 15:33:43.797077] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:26.470 [2024-04-26 15:33:43.797128] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:26.470 [2024-04-26 15:33:43.797149] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:26.470 [2024-04-26 15:33:43.797168] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:26.470 [2024-04-26 15:33:43.797189] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:23:26.470 15:33:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.470 15:33:43 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:26.470 15:33:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:26.470 [2024-04-26 15:33:43.802420] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xdf6990 was disconnected and freed. delete nvme_qpair. 00:23:26.470 15:33:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.470 15:33:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:26.470 15:33:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.470 15:33:43 -- common/autotest_common.sh@10 -- # set +x 00:23:26.470 15:33:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:26.470 15:33:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:26.470 15:33:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.470 15:33:43 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:26.470 15:33:43 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:26.470 15:33:43 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:26.730 15:33:43 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:26.730 15:33:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:26.730 15:33:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:26.730 15:33:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.730 15:33:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:26.730 15:33:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.730 15:33:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:26.730 15:33:43 -- common/autotest_common.sh@10 -- # set +x 00:23:26.730 15:33:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.730 15:33:44 -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:26.731 15:33:44 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:27.672 15:33:45 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:27.672 15:33:45 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.672 15:33:45 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:27.672 15:33:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.672 15:33:45 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:27.672 15:33:45 -- common/autotest_common.sh@10 -- # set +x 00:23:27.673 15:33:45 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:27.673 15:33:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.673 15:33:45 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:27.673 15:33:45 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:29.057 15:33:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:29.058 15:33:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:29.058 15:33:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.058 15:33:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:29.058 15:33:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.058 15:33:46 -- common/autotest_common.sh@10 -- # set +x 00:23:29.058 15:33:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:29.058 15:33:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.058 15:33:46 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:29.058 15:33:46 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:30.002 15:33:47 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:30.002 15:33:47 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:30.002 15:33:47 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.002 15:33:47 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:30.002 15:33:47 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.002 15:33:47 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:30.002 15:33:47 -- common/autotest_common.sh@10 -- # set +x 00:23:30.002 15:33:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.002 15:33:47 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:30.002 15:33:47 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:30.947 15:33:48 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:30.947 15:33:48 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:30.947 15:33:48 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.947 15:33:48 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:30.947 15:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.947 15:33:48 -- common/autotest_common.sh@10 -- # set +x 00:23:30.947 15:33:48 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:30.947 15:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.947 15:33:48 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:30.947 15:33:48 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:31.892 [2024-04-26 15:33:49.237555] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:31.892 [2024-04-26 15:33:49.237596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.892 [2024-04-26 15:33:49.237608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.892 [2024-04-26 15:33:49.237617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.892 [2024-04-26 15:33:49.237626] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.892 [2024-04-26 15:33:49.237633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.892 [2024-04-26 15:33:49.237641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.892 [2024-04-26 15:33:49.237648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.892 [2024-04-26 15:33:49.237655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.892 [2024-04-26 15:33:49.237663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.892 [2024-04-26 15:33:49.237670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.892 [2024-04-26 15:33:49.237677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbcda0 is same with the state(5) to be set 00:23:31.892 [2024-04-26 15:33:49.247575] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbcda0 (9): Bad file descriptor 00:23:31.892 [2024-04-26 15:33:49.257615] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:31.892 15:33:49 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:31.892 15:33:49 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:31.892 15:33:49 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:31.892 15:33:49 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:31.892 15:33:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.893 
15:33:49 -- common/autotest_common.sh@10 -- # set +x 00:23:31.893 15:33:49 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:32.836 [2024-04-26 15:33:50.280902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:34.224 [2024-04-26 15:33:51.304883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:34.224 [2024-04-26 15:33:51.304921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbcda0 with addr=10.0.0.2, port=4420 00:23:34.224 [2024-04-26 15:33:51.304934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbcda0 is same with the state(5) to be set 00:23:34.224 [2024-04-26 15:33:51.305296] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbcda0 (9): Bad file descriptor 00:23:34.224 [2024-04-26 15:33:51.305318] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.224 [2024-04-26 15:33:51.305344] bdev_nvme.c:6674:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:34.224 [2024-04-26 15:33:51.305366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.224 [2024-04-26 15:33:51.305375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.224 [2024-04-26 15:33:51.305385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.224 [2024-04-26 15:33:51.305392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.224 [2024-04-26 15:33:51.305400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:34.224 [2024-04-26 15:33:51.305407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.224 [2024-04-26 15:33:51.305415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.224 [2024-04-26 15:33:51.305422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.224 [2024-04-26 15:33:51.305430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.224 [2024-04-26 15:33:51.305437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.224 [2024-04-26 15:33:51.305444] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:23:34.224 [2024-04-26 15:33:51.305956] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbd1b0 (9): Bad file descriptor 00:23:34.224 [2024-04-26 15:33:51.306967] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:34.224 [2024-04-26 15:33:51.306978] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:34.224 15:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.224 15:33:51 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:34.224 15:33:51 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:35.166 15:33:52 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:35.166 15:33:52 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:35.166 15:33:52 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:35.166 15:33:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.166 15:33:52 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:35.166 15:33:52 -- common/autotest_common.sh@10 -- # set +x 00:23:35.166 15:33:52 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:35.166 15:33:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.166 15:33:52 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:35.166 15:33:52 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:35.166 15:33:52 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:35.166 15:33:52 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:35.166 15:33:52 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:35.166 15:33:52 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:35.166 15:33:52 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:35.166 15:33:52 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.166 15:33:52 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:35.166 15:33:52 -- common/autotest_common.sh@10 -- # set +x 00:23:35.166 15:33:52 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:35.167 15:33:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.167 15:33:52 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:35.167 15:33:52 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:36.178 [2024-04-26 15:33:53.358999] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:36.178 [2024-04-26 15:33:53.359020] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:36.178 [2024-04-26 15:33:53.359033] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:36.178 [2024-04-26 15:33:53.447321] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:36.178 [2024-04-26 15:33:53.506992] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:36.178 [2024-04-26 15:33:53.507028] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:36.178 [2024-04-26 15:33:53.507048] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:36.178 [2024-04-26 15:33:53.507062] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:36.178 [2024-04-26 15:33:53.507070] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:36.178 [2024-04-26 15:33:53.515990] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xdca970 was disconnected and freed. delete nvme_qpair. 
00:23:36.178 15:33:53 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:36.178 15:33:53 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:36.178 15:33:53 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.178 15:33:53 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:36.178 15:33:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.178 15:33:53 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:36.178 15:33:53 -- common/autotest_common.sh@10 -- # set +x 00:23:36.178 15:33:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.178 15:33:53 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:36.178 15:33:53 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:36.178 15:33:53 -- host/discovery_remove_ifc.sh@90 -- # killprocess 1739527 00:23:36.178 15:33:53 -- common/autotest_common.sh@936 -- # '[' -z 1739527 ']' 00:23:36.178 15:33:53 -- common/autotest_common.sh@940 -- # kill -0 1739527 00:23:36.178 15:33:53 -- common/autotest_common.sh@941 -- # uname 00:23:36.178 15:33:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:36.178 15:33:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1739527 00:23:36.439 15:33:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:36.439 15:33:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:36.439 15:33:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1739527' 00:23:36.439 killing process with pid 1739527 00:23:36.439 15:33:53 -- common/autotest_common.sh@955 -- # kill 1739527 00:23:36.439 15:33:53 -- common/autotest_common.sh@960 -- # wait 1739527 00:23:36.439 15:33:53 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:36.439 15:33:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:36.439 15:33:53 -- nvmf/common.sh@117 -- # sync 00:23:36.439 15:33:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:36.439 
15:33:53 -- nvmf/common.sh@120 -- # set +e 00:23:36.439 15:33:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:36.439 15:33:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:36.439 rmmod nvme_tcp 00:23:36.439 rmmod nvme_fabrics 00:23:36.439 rmmod nvme_keyring 00:23:36.439 15:33:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:36.439 15:33:53 -- nvmf/common.sh@124 -- # set -e 00:23:36.439 15:33:53 -- nvmf/common.sh@125 -- # return 0 00:23:36.439 15:33:53 -- nvmf/common.sh@478 -- # '[' -n 1739180 ']' 00:23:36.439 15:33:53 -- nvmf/common.sh@479 -- # killprocess 1739180 00:23:36.439 15:33:53 -- common/autotest_common.sh@936 -- # '[' -z 1739180 ']' 00:23:36.439 15:33:53 -- common/autotest_common.sh@940 -- # kill -0 1739180 00:23:36.439 15:33:53 -- common/autotest_common.sh@941 -- # uname 00:23:36.439 15:33:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:36.439 15:33:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1739180 00:23:36.700 15:33:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:36.700 15:33:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:36.700 15:33:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1739180' 00:23:36.700 killing process with pid 1739180 00:23:36.700 15:33:53 -- common/autotest_common.sh@955 -- # kill 1739180 00:23:36.700 15:33:53 -- common/autotest_common.sh@960 -- # wait 1739180 00:23:36.700 15:33:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:36.700 15:33:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:36.700 15:33:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:36.700 15:33:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:36.700 15:33:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:36.700 15:33:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.700 15:33:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:23:36.700 15:33:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.241 15:33:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:39.241 00:23:39.241 real 0m22.915s 00:23:39.241 user 0m26.070s 00:23:39.241 sys 0m6.606s 00:23:39.242 15:33:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:39.242 15:33:56 -- common/autotest_common.sh@10 -- # set +x 00:23:39.242 ************************************ 00:23:39.242 END TEST nvmf_discovery_remove_ifc 00:23:39.242 ************************************ 00:23:39.242 15:33:56 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:39.242 15:33:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:39.242 15:33:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:39.242 15:33:56 -- common/autotest_common.sh@10 -- # set +x 00:23:39.242 ************************************ 00:23:39.242 START TEST nvmf_identify_kernel_target 00:23:39.242 ************************************ 00:23:39.242 15:33:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:39.242 * Looking for test storage... 
00:23:39.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:39.242 15:33:56 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.242 15:33:56 -- nvmf/common.sh@7 -- # uname -s 00:23:39.242 15:33:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.242 15:33:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.242 15:33:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.242 15:33:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.242 15:33:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.242 15:33:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.242 15:33:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.242 15:33:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.242 15:33:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.242 15:33:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.242 15:33:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:39.242 15:33:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:39.242 15:33:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.242 15:33:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.242 15:33:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.242 15:33:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.242 15:33:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.242 15:33:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.242 15:33:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.242 15:33:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.242 15:33:56 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.242 15:33:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.242 15:33:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.242 15:33:56 -- paths/export.sh@5 -- # export PATH 00:23:39.242 15:33:56 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.242 15:33:56 -- nvmf/common.sh@47 -- # : 0 00:23:39.242 15:33:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:39.242 15:33:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:39.242 15:33:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.242 15:33:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.242 15:33:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.242 15:33:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:39.242 15:33:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:39.242 15:33:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:39.242 15:33:56 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:39.242 15:33:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:39.242 15:33:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.242 15:33:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:39.242 15:33:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:39.242 15:33:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:39.242 15:33:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.242 15:33:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.242 15:33:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.242 15:33:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:39.242 15:33:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:39.242 
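The paths/export.sh trace above prepends the same toolchain directories (/opt/go/1.21.1/bin, /opt/protoc/21.7/bin, /opt/golangci/1.54.2/bin) once per sourcing, so the final exported PATH contains each of them six times. A deduplicating prepend, sketched here as a hypothetical helper (not part of the SPDK tree, which prepends unconditionally), avoids that growth:

```shell
# path_prepend DIR: prepend DIR to PATH only if it is not already present.
# Hypothetical helper; the in-tree paths/export.sh prepends unconditionally.
path_prepend() {
    case ":${PATH}:" in
        *":$1:"*) ;;             # already present: leave PATH unchanged
        *) PATH="$1:${PATH}" ;;  # absent: prepend
    esac
}

PATH="/usr/local/bin:/usr/bin"
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/protoc/21.7/bin
path_prepend /opt/go/1.21.1/bin   # repeated call is a no-op
export PATH
echo "$PATH"   # -> /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/bin
```

Because duplicates are skipped rather than re-prepended, re-sourcing the script any number of times leaves PATH stable.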
15:33:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:39.242 15:33:56 -- common/autotest_common.sh@10 -- # set +x 00:23:45.825 15:34:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:45.825 15:34:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:45.825 15:34:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:45.825 15:34:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:45.825 15:34:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:45.825 15:34:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:45.825 15:34:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:45.825 15:34:03 -- nvmf/common.sh@295 -- # net_devs=() 00:23:45.825 15:34:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:45.825 15:34:03 -- nvmf/common.sh@296 -- # e810=() 00:23:45.825 15:34:03 -- nvmf/common.sh@296 -- # local -ga e810 00:23:45.825 15:34:03 -- nvmf/common.sh@297 -- # x722=() 00:23:45.825 15:34:03 -- nvmf/common.sh@297 -- # local -ga x722 00:23:45.825 15:34:03 -- nvmf/common.sh@298 -- # mlx=() 00:23:45.825 15:34:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:45.825 15:34:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:45.825 15:34:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:45.825 15:34:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:45.825 15:34:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:45.825 15:34:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:45.825 15:34:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:45.825 15:34:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:45.825 15:34:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:45.825 15:34:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:45.825 15:34:03 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:45.825 15:34:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:45.825 15:34:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:45.825 15:34:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:45.825 15:34:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:45.825 15:34:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:45.825 15:34:03 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:45.825 15:34:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:45.825 15:34:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.825 15:34:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:45.825 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:45.825 15:34:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:45.825 15:34:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:45.825 15:34:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.825 15:34:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.825 15:34:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:45.825 15:34:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.825 15:34:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:45.825 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:45.825 15:34:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:45.825 15:34:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:45.825 15:34:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.825 15:34:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.825 15:34:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:45.825 15:34:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:45.825 15:34:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:45.825 15:34:03 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:45.825 15:34:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:23:45.825 15:34:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.825 15:34:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:45.825 15:34:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.825 15:34:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:45.825 Found net devices under 0000:31:00.0: cvl_0_0 00:23:45.825 15:34:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.825 15:34:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.825 15:34:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.825 15:34:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:45.825 15:34:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.825 15:34:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:45.825 Found net devices under 0000:31:00.1: cvl_0_1 00:23:45.825 15:34:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.825 15:34:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:45.825 15:34:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:45.825 15:34:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:45.825 15:34:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:45.825 15:34:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:45.825 15:34:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.825 15:34:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.825 15:34:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:45.825 15:34:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:45.825 15:34:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:45.825 15:34:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:45.825 15:34:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:45.825 15:34:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
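gather_supported_nvmf_pci_devs above buckets NICs by PCI vendor:device ID (Intel 0x8086, Mellanox 0x15b3) before probing /sys/bus/pci/devices/<bdf>/net for the interface names; in this run both 0x159b ports land in the e810 list and resolve to cvl_0_0 and cvl_0_1. Stripped of the sysfs probing, the classification reduces to a table lookup, sketched here over the IDs visible in this log (the real script builds the table from a pci_bus_cache and covers more devices):

```shell
# Classify "vendor:device" IDs roughly the way nvmf/common.sh buckets them.
# The ID list below is a subset taken from this log, not the full in-tree table.
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;   # Intel E810 family (ice)
        0x8086:0x37d2)               echo x722 ;;   # Intel X722 (i40e)
        0x15b3:*)                    echo mlx  ;;   # Mellanox ConnectX family
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086:0x159b   # the two ports found in this run -> e810
classify_nic 0x15b3:0x1017   # -> mlx
```

The `(( 2 == 0 ))` checks in the trace are the script bailing out early when a bucket is empty; here both the pci_devs and net_devs counts are 2, so nvmf_tcp_init proceeds with cvl_0_0 as the target interface and cvl_0_1 as the initiator.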
00:23:45.825 15:34:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.825 15:34:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:45.825 15:34:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:45.825 15:34:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:45.825 15:34:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.085 15:34:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.085 15:34:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.085 15:34:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:46.085 15:34:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.085 15:34:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.085 15:34:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.085 15:34:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:46.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:23:46.085 00:23:46.085 --- 10.0.0.2 ping statistics --- 00:23:46.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.085 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:23:46.085 15:34:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:46.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:23:46.085 00:23:46.085 --- 10.0.0.1 ping statistics --- 00:23:46.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.085 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:23:46.085 15:34:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.085 15:34:03 -- nvmf/common.sh@411 -- # return 0 00:23:46.085 15:34:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:46.085 15:34:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.085 15:34:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:46.085 15:34:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:46.085 15:34:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.085 15:34:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:46.085 15:34:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:46.085 15:34:03 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:46.085 15:34:03 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:46.085 15:34:03 -- nvmf/common.sh@717 -- # local ip 00:23:46.085 15:34:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:46.085 15:34:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:46.085 15:34:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.085 15:34:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.085 15:34:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:46.085 15:34:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.085 15:34:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:46.085 15:34:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:46.085 15:34:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:46.085 15:34:03 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:46.085 15:34:03 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:46.085 15:34:03 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:46.085 15:34:03 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:23:46.085 15:34:03 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:46.085 15:34:03 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:46.085 15:34:03 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:46.085 15:34:03 -- nvmf/common.sh@628 -- # local block nvme 00:23:46.085 15:34:03 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:23:46.085 15:34:03 -- nvmf/common.sh@631 -- # modprobe nvmet 00:23:46.346 15:34:03 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:46.346 15:34:03 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:49.650 Waiting for block devices as requested 00:23:49.650 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:23:49.912 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:23:49.912 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:23:49.912 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:23:50.173 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:23:50.173 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:23:50.173 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:23:50.434 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:23:50.434 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:23:50.434 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:23:50.695 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:23:50.695 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:23:50.695 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:23:50.957 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:23:50.957 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:23:50.958 0000:00:01.0 (8086 0b00): vfio-pci 
-> ioatdma 00:23:50.958 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:23:51.531 15:34:08 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:51.531 15:34:08 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:51.531 15:34:08 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:23:51.531 15:34:08 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:23:51.531 15:34:08 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:51.531 15:34:08 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:51.531 15:34:08 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:23:51.531 15:34:08 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:51.531 15:34:08 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:51.531 No valid GPT data, bailing 00:23:51.531 15:34:08 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:51.531 15:34:08 -- scripts/common.sh@391 -- # pt= 00:23:51.531 15:34:08 -- scripts/common.sh@392 -- # return 1 00:23:51.531 15:34:08 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:23:51.531 15:34:08 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:23:51.531 15:34:08 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:51.531 15:34:08 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:51.531 15:34:08 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:51.531 15:34:08 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:51.531 15:34:08 -- nvmf/common.sh@656 -- # echo 1 00:23:51.531 15:34:08 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:23:51.531 15:34:08 -- nvmf/common.sh@658 -- # echo 1 00:23:51.531 15:34:08 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:23:51.531 15:34:08 -- nvmf/common.sh@661 -- # echo tcp 00:23:51.531 15:34:08 -- nvmf/common.sh@662 -- # 
echo 4420 00:23:51.531 15:34:08 -- nvmf/common.sh@663 -- # echo ipv4 00:23:51.531 15:34:08 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:51.531 15:34:08 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:23:51.531 00:23:51.531 Discovery Log Number of Records 2, Generation counter 2 00:23:51.531 =====Discovery Log Entry 0====== 00:23:51.531 trtype: tcp 00:23:51.531 adrfam: ipv4 00:23:51.531 subtype: current discovery subsystem 00:23:51.531 treq: not specified, sq flow control disable supported 00:23:51.531 portid: 1 00:23:51.531 trsvcid: 4420 00:23:51.531 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:51.531 traddr: 10.0.0.1 00:23:51.531 eflags: none 00:23:51.531 sectype: none 00:23:51.531 =====Discovery Log Entry 1====== 00:23:51.531 trtype: tcp 00:23:51.531 adrfam: ipv4 00:23:51.531 subtype: nvme subsystem 00:23:51.531 treq: not specified, sq flow control disable supported 00:23:51.531 portid: 1 00:23:51.531 trsvcid: 4420 00:23:51.531 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:51.531 traddr: 10.0.0.1 00:23:51.531 eflags: none 00:23:51.531 sectype: none 00:23:51.531 15:34:08 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:51.531 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:51.531 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.531 ===================================================== 00:23:51.531 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:51.531 ===================================================== 00:23:51.531 Controller Capabilities/Features 00:23:51.531 ================================ 00:23:51.531 Vendor ID: 0000 
00:23:51.531 Subsystem Vendor ID: 0000 00:23:51.531 Serial Number: 6e5a780eca0d32757034 00:23:51.531 Model Number: Linux 00:23:51.531 Firmware Version: 6.7.0-68 00:23:51.531 Recommended Arb Burst: 0 00:23:51.531 IEEE OUI Identifier: 00 00 00 00:23:51.531 Multi-path I/O 00:23:51.531 May have multiple subsystem ports: No 00:23:51.531 May have multiple controllers: No 00:23:51.531 Associated with SR-IOV VF: No 00:23:51.531 Max Data Transfer Size: Unlimited 00:23:51.531 Max Number of Namespaces: 0 00:23:51.531 Max Number of I/O Queues: 1024 00:23:51.531 NVMe Specification Version (VS): 1.3 00:23:51.531 NVMe Specification Version (Identify): 1.3 00:23:51.531 Maximum Queue Entries: 1024 00:23:51.531 Contiguous Queues Required: No 00:23:51.531 Arbitration Mechanisms Supported 00:23:51.531 Weighted Round Robin: Not Supported 00:23:51.531 Vendor Specific: Not Supported 00:23:51.531 Reset Timeout: 7500 ms 00:23:51.531 Doorbell Stride: 4 bytes 00:23:51.531 NVM Subsystem Reset: Not Supported 00:23:51.531 Command Sets Supported 00:23:51.531 NVM Command Set: Supported 00:23:51.531 Boot Partition: Not Supported 00:23:51.531 Memory Page Size Minimum: 4096 bytes 00:23:51.531 Memory Page Size Maximum: 4096 bytes 00:23:51.531 Persistent Memory Region: Not Supported 00:23:51.531 Optional Asynchronous Events Supported 00:23:51.531 Namespace Attribute Notices: Not Supported 00:23:51.531 Firmware Activation Notices: Not Supported 00:23:51.531 ANA Change Notices: Not Supported 00:23:51.531 PLE Aggregate Log Change Notices: Not Supported 00:23:51.531 LBA Status Info Alert Notices: Not Supported 00:23:51.531 EGE Aggregate Log Change Notices: Not Supported 00:23:51.531 Normal NVM Subsystem Shutdown event: Not Supported 00:23:51.531 Zone Descriptor Change Notices: Not Supported 00:23:51.531 Discovery Log Change Notices: Supported 00:23:51.531 Controller Attributes 00:23:51.531 128-bit Host Identifier: Not Supported 00:23:51.531 Non-Operational Permissive Mode: Not Supported 00:23:51.531 NVM 
Sets: Not Supported 00:23:51.531 Read Recovery Levels: Not Supported 00:23:51.531 Endurance Groups: Not Supported 00:23:51.531 Predictable Latency Mode: Not Supported 00:23:51.531 Traffic Based Keep ALive: Not Supported 00:23:51.531 Namespace Granularity: Not Supported 00:23:51.531 SQ Associations: Not Supported 00:23:51.531 UUID List: Not Supported 00:23:51.531 Multi-Domain Subsystem: Not Supported 00:23:51.531 Fixed Capacity Management: Not Supported 00:23:51.531 Variable Capacity Management: Not Supported 00:23:51.531 Delete Endurance Group: Not Supported 00:23:51.531 Delete NVM Set: Not Supported 00:23:51.531 Extended LBA Formats Supported: Not Supported 00:23:51.531 Flexible Data Placement Supported: Not Supported 00:23:51.531 00:23:51.531 Controller Memory Buffer Support 00:23:51.531 ================================ 00:23:51.531 Supported: No 00:23:51.531 00:23:51.531 Persistent Memory Region Support 00:23:51.531 ================================ 00:23:51.531 Supported: No 00:23:51.531 00:23:51.531 Admin Command Set Attributes 00:23:51.531 ============================ 00:23:51.531 Security Send/Receive: Not Supported 00:23:51.531 Format NVM: Not Supported 00:23:51.531 Firmware Activate/Download: Not Supported 00:23:51.531 Namespace Management: Not Supported 00:23:51.531 Device Self-Test: Not Supported 00:23:51.531 Directives: Not Supported 00:23:51.531 NVMe-MI: Not Supported 00:23:51.531 Virtualization Management: Not Supported 00:23:51.531 Doorbell Buffer Config: Not Supported 00:23:51.531 Get LBA Status Capability: Not Supported 00:23:51.531 Command & Feature Lockdown Capability: Not Supported 00:23:51.531 Abort Command Limit: 1 00:23:51.531 Async Event Request Limit: 1 00:23:51.531 Number of Firmware Slots: N/A 00:23:51.531 Firmware Slot 1 Read-Only: N/A 00:23:51.531 Firmware Activation Without Reset: N/A 00:23:51.531 Multiple Update Detection Support: N/A 00:23:51.531 Firmware Update Granularity: No Information Provided 00:23:51.531 Per-Namespace SMART 
Log: No 00:23:51.531 Asymmetric Namespace Access Log Page: Not Supported 00:23:51.531 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:51.531 Command Effects Log Page: Not Supported 00:23:51.531 Get Log Page Extended Data: Supported 00:23:51.531 Telemetry Log Pages: Not Supported 00:23:51.531 Persistent Event Log Pages: Not Supported 00:23:51.531 Supported Log Pages Log Page: May Support 00:23:51.531 Commands Supported & Effects Log Page: Not Supported 00:23:51.531 Feature Identifiers & Effects Log Page:May Support 00:23:51.531 NVMe-MI Commands & Effects Log Page: May Support 00:23:51.531 Data Area 4 for Telemetry Log: Not Supported 00:23:51.531 Error Log Page Entries Supported: 1 00:23:51.531 Keep Alive: Not Supported 00:23:51.531 00:23:51.531 NVM Command Set Attributes 00:23:51.531 ========================== 00:23:51.531 Submission Queue Entry Size 00:23:51.531 Max: 1 00:23:51.531 Min: 1 00:23:51.531 Completion Queue Entry Size 00:23:51.531 Max: 1 00:23:51.531 Min: 1 00:23:51.531 Number of Namespaces: 0 00:23:51.531 Compare Command: Not Supported 00:23:51.531 Write Uncorrectable Command: Not Supported 00:23:51.531 Dataset Management Command: Not Supported 00:23:51.531 Write Zeroes Command: Not Supported 00:23:51.531 Set Features Save Field: Not Supported 00:23:51.532 Reservations: Not Supported 00:23:51.532 Timestamp: Not Supported 00:23:51.532 Copy: Not Supported 00:23:51.532 Volatile Write Cache: Not Present 00:23:51.532 Atomic Write Unit (Normal): 1 00:23:51.532 Atomic Write Unit (PFail): 1 00:23:51.532 Atomic Compare & Write Unit: 1 00:23:51.532 Fused Compare & Write: Not Supported 00:23:51.532 Scatter-Gather List 00:23:51.532 SGL Command Set: Supported 00:23:51.532 SGL Keyed: Not Supported 00:23:51.532 SGL Bit Bucket Descriptor: Not Supported 00:23:51.532 SGL Metadata Pointer: Not Supported 00:23:51.532 Oversized SGL: Not Supported 00:23:51.532 SGL Metadata Address: Not Supported 00:23:51.532 SGL Offset: Supported 00:23:51.532 Transport SGL Data 
Block: Not Supported 00:23:51.532 Replay Protected Memory Block: Not Supported 00:23:51.532 00:23:51.532 Firmware Slot Information 00:23:51.532 ========================= 00:23:51.532 Active slot: 0 00:23:51.532 00:23:51.532 00:23:51.532 Error Log 00:23:51.532 ========= 00:23:51.532 00:23:51.532 Active Namespaces 00:23:51.532 ================= 00:23:51.532 Discovery Log Page 00:23:51.532 ================== 00:23:51.532 Generation Counter: 2 00:23:51.532 Number of Records: 2 00:23:51.532 Record Format: 0 00:23:51.532 00:23:51.532 Discovery Log Entry 0 00:23:51.532 ---------------------- 00:23:51.532 Transport Type: 3 (TCP) 00:23:51.532 Address Family: 1 (IPv4) 00:23:51.532 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:51.532 Entry Flags: 00:23:51.532 Duplicate Returned Information: 0 00:23:51.532 Explicit Persistent Connection Support for Discovery: 0 00:23:51.532 Transport Requirements: 00:23:51.532 Secure Channel: Not Specified 00:23:51.532 Port ID: 1 (0x0001) 00:23:51.532 Controller ID: 65535 (0xffff) 00:23:51.532 Admin Max SQ Size: 32 00:23:51.532 Transport Service Identifier: 4420 00:23:51.532 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:51.532 Transport Address: 10.0.0.1 00:23:51.532 Discovery Log Entry 1 00:23:51.532 ---------------------- 00:23:51.532 Transport Type: 3 (TCP) 00:23:51.532 Address Family: 1 (IPv4) 00:23:51.532 Subsystem Type: 2 (NVM Subsystem) 00:23:51.532 Entry Flags: 00:23:51.532 Duplicate Returned Information: 0 00:23:51.532 Explicit Persistent Connection Support for Discovery: 0 00:23:51.532 Transport Requirements: 00:23:51.532 Secure Channel: Not Specified 00:23:51.532 Port ID: 1 (0x0001) 00:23:51.532 Controller ID: 65535 (0xffff) 00:23:51.532 Admin Max SQ Size: 32 00:23:51.532 Transport Service Identifier: 4420 00:23:51.532 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:51.532 Transport Address: 10.0.0.1 00:23:51.532 15:34:08 -- host/identify_kernel_nvmf.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:51.794 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.794 get_feature(0x01) failed 00:23:51.794 get_feature(0x02) failed 00:23:51.794 get_feature(0x04) failed 00:23:51.794 ===================================================== 00:23:51.794 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:51.794 ===================================================== 00:23:51.794 Controller Capabilities/Features 00:23:51.794 ================================ 00:23:51.794 Vendor ID: 0000 00:23:51.794 Subsystem Vendor ID: 0000 00:23:51.794 Serial Number: f3a4b65b4ad7d9443c4c 00:23:51.794 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:51.794 Firmware Version: 6.7.0-68 00:23:51.794 Recommended Arb Burst: 6 00:23:51.794 IEEE OUI Identifier: 00 00 00 00:23:51.794 Multi-path I/O 00:23:51.794 May have multiple subsystem ports: Yes 00:23:51.794 May have multiple controllers: Yes 00:23:51.794 Associated with SR-IOV VF: No 00:23:51.794 Max Data Transfer Size: Unlimited 00:23:51.794 Max Number of Namespaces: 1024 00:23:51.794 Max Number of I/O Queues: 128 00:23:51.794 NVMe Specification Version (VS): 1.3 00:23:51.794 NVMe Specification Version (Identify): 1.3 00:23:51.794 Maximum Queue Entries: 1024 00:23:51.794 Contiguous Queues Required: No 00:23:51.794 Arbitration Mechanisms Supported 00:23:51.794 Weighted Round Robin: Not Supported 00:23:51.794 Vendor Specific: Not Supported 00:23:51.794 Reset Timeout: 7500 ms 00:23:51.794 Doorbell Stride: 4 bytes 00:23:51.794 NVM Subsystem Reset: Not Supported 00:23:51.794 Command Sets Supported 00:23:51.794 NVM Command Set: Supported 00:23:51.794 Boot Partition: Not Supported 00:23:51.794 Memory Page Size Minimum: 4096 bytes 00:23:51.794 Memory Page Size Maximum: 4096 bytes 00:23:51.794 Persistent Memory Region: Not Supported 
00:23:51.794 Optional Asynchronous Events Supported 00:23:51.794 Namespace Attribute Notices: Supported 00:23:51.794 Firmware Activation Notices: Not Supported 00:23:51.794 ANA Change Notices: Supported 00:23:51.794 PLE Aggregate Log Change Notices: Not Supported 00:23:51.794 LBA Status Info Alert Notices: Not Supported 00:23:51.794 EGE Aggregate Log Change Notices: Not Supported 00:23:51.794 Normal NVM Subsystem Shutdown event: Not Supported 00:23:51.794 Zone Descriptor Change Notices: Not Supported 00:23:51.794 Discovery Log Change Notices: Not Supported 00:23:51.794 Controller Attributes 00:23:51.794 128-bit Host Identifier: Supported 00:23:51.794 Non-Operational Permissive Mode: Not Supported 00:23:51.794 NVM Sets: Not Supported 00:23:51.794 Read Recovery Levels: Not Supported 00:23:51.794 Endurance Groups: Not Supported 00:23:51.794 Predictable Latency Mode: Not Supported 00:23:51.794 Traffic Based Keep ALive: Supported 00:23:51.794 Namespace Granularity: Not Supported 00:23:51.794 SQ Associations: Not Supported 00:23:51.794 UUID List: Not Supported 00:23:51.794 Multi-Domain Subsystem: Not Supported 00:23:51.794 Fixed Capacity Management: Not Supported 00:23:51.794 Variable Capacity Management: Not Supported 00:23:51.794 Delete Endurance Group: Not Supported 00:23:51.795 Delete NVM Set: Not Supported 00:23:51.795 Extended LBA Formats Supported: Not Supported 00:23:51.795 Flexible Data Placement Supported: Not Supported 00:23:51.795 00:23:51.795 Controller Memory Buffer Support 00:23:51.795 ================================ 00:23:51.795 Supported: No 00:23:51.795 00:23:51.795 Persistent Memory Region Support 00:23:51.795 ================================ 00:23:51.795 Supported: No 00:23:51.795 00:23:51.795 Admin Command Set Attributes 00:23:51.795 ============================ 00:23:51.795 Security Send/Receive: Not Supported 00:23:51.795 Format NVM: Not Supported 00:23:51.795 Firmware Activate/Download: Not Supported 00:23:51.795 Namespace Management: Not 
Supported 00:23:51.795 Device Self-Test: Not Supported 00:23:51.795 Directives: Not Supported 00:23:51.795 NVMe-MI: Not Supported 00:23:51.795 Virtualization Management: Not Supported 00:23:51.795 Doorbell Buffer Config: Not Supported 00:23:51.795 Get LBA Status Capability: Not Supported 00:23:51.795 Command & Feature Lockdown Capability: Not Supported 00:23:51.795 Abort Command Limit: 4 00:23:51.795 Async Event Request Limit: 4 00:23:51.795 Number of Firmware Slots: N/A 00:23:51.795 Firmware Slot 1 Read-Only: N/A 00:23:51.795 Firmware Activation Without Reset: N/A 00:23:51.795 Multiple Update Detection Support: N/A 00:23:51.795 Firmware Update Granularity: No Information Provided 00:23:51.795 Per-Namespace SMART Log: Yes 00:23:51.795 Asymmetric Namespace Access Log Page: Supported 00:23:51.795 ANA Transition Time : 10 sec 00:23:51.795 00:23:51.795 Asymmetric Namespace Access Capabilities 00:23:51.795 ANA Optimized State : Supported 00:23:51.795 ANA Non-Optimized State : Supported 00:23:51.795 ANA Inaccessible State : Supported 00:23:51.795 ANA Persistent Loss State : Supported 00:23:51.795 ANA Change State : Supported 00:23:51.795 ANAGRPID is not changed : No 00:23:51.795 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:51.795 00:23:51.795 ANA Group Identifier Maximum : 128 00:23:51.795 Number of ANA Group Identifiers : 128 00:23:51.795 Max Number of Allowed Namespaces : 1024 00:23:51.795 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:51.795 Command Effects Log Page: Supported 00:23:51.795 Get Log Page Extended Data: Supported 00:23:51.795 Telemetry Log Pages: Not Supported 00:23:51.795 Persistent Event Log Pages: Not Supported 00:23:51.795 Supported Log Pages Log Page: May Support 00:23:51.795 Commands Supported & Effects Log Page: Not Supported 00:23:51.795 Feature Identifiers & Effects Log Page:May Support 00:23:51.795 NVMe-MI Commands & Effects Log Page: May Support 00:23:51.795 Data Area 4 for Telemetry Log: Not Supported 00:23:51.795 Error Log Page 
Entries Supported: 128 00:23:51.795 Keep Alive: Supported 00:23:51.795 Keep Alive Granularity: 1000 ms 00:23:51.795 00:23:51.795 NVM Command Set Attributes 00:23:51.795 ========================== 00:23:51.795 Submission Queue Entry Size 00:23:51.795 Max: 64 00:23:51.795 Min: 64 00:23:51.795 Completion Queue Entry Size 00:23:51.795 Max: 16 00:23:51.795 Min: 16 00:23:51.795 Number of Namespaces: 1024 00:23:51.795 Compare Command: Not Supported 00:23:51.795 Write Uncorrectable Command: Not Supported 00:23:51.795 Dataset Management Command: Supported 00:23:51.795 Write Zeroes Command: Supported 00:23:51.795 Set Features Save Field: Not Supported 00:23:51.795 Reservations: Not Supported 00:23:51.795 Timestamp: Not Supported 00:23:51.795 Copy: Not Supported 00:23:51.795 Volatile Write Cache: Present 00:23:51.795 Atomic Write Unit (Normal): 1 00:23:51.795 Atomic Write Unit (PFail): 1 00:23:51.795 Atomic Compare & Write Unit: 1 00:23:51.795 Fused Compare & Write: Not Supported 00:23:51.795 Scatter-Gather List 00:23:51.795 SGL Command Set: Supported 00:23:51.795 SGL Keyed: Not Supported 00:23:51.795 SGL Bit Bucket Descriptor: Not Supported 00:23:51.795 SGL Metadata Pointer: Not Supported 00:23:51.795 Oversized SGL: Not Supported 00:23:51.795 SGL Metadata Address: Not Supported 00:23:51.795 SGL Offset: Supported 00:23:51.795 Transport SGL Data Block: Not Supported 00:23:51.795 Replay Protected Memory Block: Not Supported 00:23:51.795 00:23:51.795 Firmware Slot Information 00:23:51.795 ========================= 00:23:51.795 Active slot: 0 00:23:51.795 00:23:51.795 Asymmetric Namespace Access 00:23:51.795 =========================== 00:23:51.795 Change Count : 0 00:23:51.795 Number of ANA Group Descriptors : 1 00:23:51.795 ANA Group Descriptor : 0 00:23:51.795 ANA Group ID : 1 00:23:51.795 Number of NSID Values : 1 00:23:51.795 Change Count : 0 00:23:51.795 ANA State : 1 00:23:51.795 Namespace Identifier : 1 00:23:51.795 00:23:51.795 Commands Supported and Effects 00:23:51.795 
============================== 00:23:51.795 Admin Commands 00:23:51.795 -------------- 00:23:51.795 Get Log Page (02h): Supported 00:23:51.795 Identify (06h): Supported 00:23:51.795 Abort (08h): Supported 00:23:51.795 Set Features (09h): Supported 00:23:51.795 Get Features (0Ah): Supported 00:23:51.795 Asynchronous Event Request (0Ch): Supported 00:23:51.795 Keep Alive (18h): Supported 00:23:51.795 I/O Commands 00:23:51.795 ------------ 00:23:51.795 Flush (00h): Supported 00:23:51.795 Write (01h): Supported LBA-Change 00:23:51.795 Read (02h): Supported 00:23:51.795 Write Zeroes (08h): Supported LBA-Change 00:23:51.795 Dataset Management (09h): Supported 00:23:51.795 00:23:51.795 Error Log 00:23:51.795 ========= 00:23:51.795 Entry: 0 00:23:51.795 Error Count: 0x3 00:23:51.795 Submission Queue Id: 0x0 00:23:51.795 Command Id: 0x5 00:23:51.795 Phase Bit: 0 00:23:51.795 Status Code: 0x2 00:23:51.795 Status Code Type: 0x0 00:23:51.795 Do Not Retry: 1 00:23:51.795 Error Location: 0x28 00:23:51.795 LBA: 0x0 00:23:51.795 Namespace: 0x0 00:23:51.795 Vendor Log Page: 0x0 00:23:51.795 ----------- 00:23:51.795 Entry: 1 00:23:51.795 Error Count: 0x2 00:23:51.795 Submission Queue Id: 0x0 00:23:51.795 Command Id: 0x5 00:23:51.795 Phase Bit: 0 00:23:51.795 Status Code: 0x2 00:23:51.795 Status Code Type: 0x0 00:23:51.795 Do Not Retry: 1 00:23:51.795 Error Location: 0x28 00:23:51.795 LBA: 0x0 00:23:51.795 Namespace: 0x0 00:23:51.795 Vendor Log Page: 0x0 00:23:51.795 ----------- 00:23:51.795 Entry: 2 00:23:51.795 Error Count: 0x1 00:23:51.795 Submission Queue Id: 0x0 00:23:51.795 Command Id: 0x4 00:23:51.795 Phase Bit: 0 00:23:51.795 Status Code: 0x2 00:23:51.795 Status Code Type: 0x0 00:23:51.795 Do Not Retry: 1 00:23:51.795 Error Location: 0x28 00:23:51.795 LBA: 0x0 00:23:51.795 Namespace: 0x0 00:23:51.795 Vendor Log Page: 0x0 00:23:51.795 00:23:51.795 Number of Queues 00:23:51.795 ================ 00:23:51.795 Number of I/O Submission Queues: 128 00:23:51.795 Number of I/O 
Completion Queues: 128 00:23:51.795 00:23:51.795 ZNS Specific Controller Data 00:23:51.795 ============================ 00:23:51.795 Zone Append Size Limit: 0 00:23:51.795 00:23:51.795 00:23:51.795 Active Namespaces 00:23:51.795 ================= 00:23:51.795 get_feature(0x05) failed 00:23:51.795 Namespace ID:1 00:23:51.795 Command Set Identifier: NVM (00h) 00:23:51.795 Deallocate: Supported 00:23:51.795 Deallocated/Unwritten Error: Not Supported 00:23:51.795 Deallocated Read Value: Unknown 00:23:51.795 Deallocate in Write Zeroes: Not Supported 00:23:51.795 Deallocated Guard Field: 0xFFFF 00:23:51.795 Flush: Supported 00:23:51.795 Reservation: Not Supported 00:23:51.795 Namespace Sharing Capabilities: Multiple Controllers 00:23:51.795 Size (in LBAs): 3750748848 (1788GiB) 00:23:51.795 Capacity (in LBAs): 3750748848 (1788GiB) 00:23:51.795 Utilization (in LBAs): 3750748848 (1788GiB) 00:23:51.795 UUID: 0877e11d-5595-42eb-b7d6-40d600b06bf6 00:23:51.795 Thin Provisioning: Not Supported 00:23:51.795 Per-NS Atomic Units: Yes 00:23:51.795 Atomic Write Unit (Normal): 8 00:23:51.795 Atomic Write Unit (PFail): 8 00:23:51.795 Preferred Write Granularity: 8 00:23:51.795 Atomic Compare & Write Unit: 8 00:23:51.795 Atomic Boundary Size (Normal): 0 00:23:51.795 Atomic Boundary Size (PFail): 0 00:23:51.795 Atomic Boundary Offset: 0 00:23:51.795 NGUID/EUI64 Never Reused: No 00:23:51.795 ANA group ID: 1 00:23:51.795 Namespace Write Protected: No 00:23:51.795 Number of LBA Formats: 1 00:23:51.795 Current LBA Format: LBA Format #00 00:23:51.795 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:51.795 00:23:51.795 15:34:09 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:51.795 15:34:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:51.796 15:34:09 -- nvmf/common.sh@117 -- # sync 00:23:51.796 15:34:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:51.796 15:34:09 -- nvmf/common.sh@120 -- # set +e 00:23:51.796 15:34:09 -- nvmf/common.sh@121 -- # for i in {1..20} 
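The namespace geometry in the identify dump above can be cross-checked by hand: 3750748848 LBAs at the reported 512-byte LBA format works out to the "1788GiB" the tool prints. Plain arithmetic, no SPDK dependency:

```python
# Cross-check the identify output: LBA count x LBA data size -> GiB.
lba_count = 3750748848   # "Size (in LBAs)" from the log above
lba_size = 512           # "LBA Format #00: Data Size: 512"
size_bytes = lba_count * lba_size
size_gib = size_bytes / 2**30
print(f"{size_gib:.1f} GiB")  # ~1788.5, which the tool rounds down to 1788GiB
```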
00:23:51.796 15:34:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:51.796 rmmod nvme_tcp 00:23:51.796 rmmod nvme_fabrics 00:23:51.796 15:34:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:51.796 15:34:09 -- nvmf/common.sh@124 -- # set -e 00:23:51.796 15:34:09 -- nvmf/common.sh@125 -- # return 0 00:23:51.796 15:34:09 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:23:51.796 15:34:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:51.796 15:34:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:51.796 15:34:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:51.796 15:34:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:51.796 15:34:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:51.796 15:34:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.796 15:34:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:51.796 15:34:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.343 15:34:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:54.343 15:34:11 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:54.343 15:34:11 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:54.343 15:34:11 -- nvmf/common.sh@675 -- # echo 0 00:23:54.343 15:34:11 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:54.343 15:34:11 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:54.343 15:34:11 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:54.343 15:34:11 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:54.343 15:34:11 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:23:54.343 15:34:11 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:23:54.343 15:34:11 -- 
nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:57.649 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:23:57.649 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:23:57.649 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:23:57.649 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:23:57.649 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:23:57.649 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:23:57.649 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:23:57.649 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:23:57.649 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:23:57.649 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:23:57.649 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:23:57.649 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:23:57.649 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:23:57.649 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:23:57.649 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:23:57.649 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:23:57.649 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:23:57.911 00:23:57.911 real 0m18.811s 00:23:57.911 user 0m4.977s 00:23:57.911 sys 0m10.747s 00:23:57.911 15:34:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:57.911 15:34:15 -- common/autotest_common.sh@10 -- # set +x 00:23:57.911 ************************************ 00:23:57.911 END TEST nvmf_identify_kernel_target 00:23:57.911 ************************************ 00:23:57.911 15:34:15 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:57.911 15:34:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:57.911 15:34:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:57.911 15:34:15 -- common/autotest_common.sh@10 -- # set +x 00:23:57.911 ************************************ 00:23:57.911 START TEST nvmf_auth 00:23:57.911 ************************************ 00:23:57.911 
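The nvmf `common.sh` setup that the auth test sources derives its host NQN with `nvme gen-hostnqn`, which emits a UUID-based NQN of the form `nqn.2014-08.org.nvmexpress:uuid:<uuid>` (visible as `NVME_HOSTNQN` in the trace below). A minimal stand-in, assuming only that documented format:

```python
import uuid

def gen_hostnqn() -> str:
    """Mimic `nvme gen-hostnqn`: a UUID-based host NQN in the
    nqn.2014-08.org.nvmexpress:uuid:<uuid> form."""
    return f"nqn.2014-08.org.nvmexpress:uuid:{uuid.uuid4()}"

print(gen_hostnqn())
```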
15:34:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:58.172 * Looking for test storage... 00:23:58.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:58.172 15:34:15 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.172 15:34:15 -- nvmf/common.sh@7 -- # uname -s 00:23:58.172 15:34:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.172 15:34:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.172 15:34:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.172 15:34:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.172 15:34:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.172 15:34:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.172 15:34:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.172 15:34:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.172 15:34:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.172 15:34:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.172 15:34:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:58.172 15:34:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:58.172 15:34:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.172 15:34:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.172 15:34:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:58.172 15:34:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:58.172 15:34:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:58.172 15:34:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.172 15:34:15 -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.172 15:34:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.172 15:34:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.172 15:34:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.172 15:34:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.172 15:34:15 -- paths/export.sh@5 -- # export PATH 00:23:58.172 15:34:15 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.172 15:34:15 -- nvmf/common.sh@47 -- # : 0 00:23:58.172 15:34:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:58.172 15:34:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:58.172 15:34:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:58.172 15:34:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.172 15:34:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.172 15:34:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:58.172 15:34:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:58.172 15:34:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:58.172 15:34:15 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:58.172 15:34:15 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:58.172 15:34:15 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:58.172 15:34:15 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:58.172 15:34:15 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:58.172 15:34:15 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:58.172 15:34:15 -- host/auth.sh@21 -- # keys=() 00:23:58.172 15:34:15 -- host/auth.sh@77 -- # nvmftestinit 00:23:58.172 15:34:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:58.172 15:34:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:23:58.172 15:34:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:58.172 15:34:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:58.172 15:34:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:58.172 15:34:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.172 15:34:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:58.172 15:34:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.172 15:34:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:58.172 15:34:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:58.172 15:34:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:58.172 15:34:15 -- common/autotest_common.sh@10 -- # set +x 00:24:06.314 15:34:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:06.314 15:34:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:06.314 15:34:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:06.314 15:34:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:06.314 15:34:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:06.314 15:34:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:06.314 15:34:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:06.314 15:34:22 -- nvmf/common.sh@295 -- # net_devs=() 00:24:06.314 15:34:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:06.314 15:34:22 -- nvmf/common.sh@296 -- # e810=() 00:24:06.314 15:34:22 -- nvmf/common.sh@296 -- # local -ga e810 00:24:06.314 15:34:22 -- nvmf/common.sh@297 -- # x722=() 00:24:06.314 15:34:22 -- nvmf/common.sh@297 -- # local -ga x722 00:24:06.314 15:34:22 -- nvmf/common.sh@298 -- # mlx=() 00:24:06.314 15:34:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:06.314 15:34:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:06.314 15:34:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:06.314 15:34:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:24:06.314 15:34:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:06.314 15:34:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:06.314 15:34:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:06.314 15:34:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:06.314 15:34:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:06.314 15:34:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:06.314 15:34:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:06.314 15:34:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:06.314 15:34:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:06.314 15:34:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:06.314 15:34:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:06.314 15:34:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:06.314 15:34:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:06.314 15:34:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:06.314 15:34:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:06.314 15:34:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:06.314 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:06.314 15:34:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:06.314 15:34:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:06.314 15:34:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.314 15:34:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.314 15:34:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:06.314 15:34:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:06.314 15:34:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:06.314 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:06.314 15:34:22 -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:06.314 15:34:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:06.314 15:34:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.314 15:34:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.314 15:34:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:06.314 15:34:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:06.314 15:34:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:06.314 15:34:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:06.314 15:34:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:06.314 15:34:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.314 15:34:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:06.314 15:34:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.314 15:34:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:06.314 Found net devices under 0000:31:00.0: cvl_0_0 00:24:06.314 15:34:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.314 15:34:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:06.314 15:34:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.314 15:34:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:06.314 15:34:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.314 15:34:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:06.314 Found net devices under 0000:31:00.1: cvl_0_1 00:24:06.314 15:34:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.314 15:34:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:06.314 15:34:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:06.314 15:34:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:06.314 15:34:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:06.314 15:34:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 
00:24:06.314 15:34:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:06.314 15:34:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:06.314 15:34:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:06.314 15:34:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:06.314 15:34:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:06.314 15:34:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:06.314 15:34:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:06.314 15:34:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:06.314 15:34:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:06.314 15:34:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:06.314 15:34:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:06.314 15:34:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:06.314 15:34:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:06.314 15:34:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:06.314 15:34:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:06.314 15:34:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:06.314 15:34:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:06.314 15:34:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:06.314 15:34:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:06.314 15:34:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:06.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:06.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:24:06.314 00:24:06.314 --- 10.0.0.2 ping statistics --- 00:24:06.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.314 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:24:06.314 15:34:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:06.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:06.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:24:06.314 00:24:06.314 --- 10.0.0.1 ping statistics --- 00:24:06.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.314 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:24:06.314 15:34:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:06.314 15:34:22 -- nvmf/common.sh@411 -- # return 0 00:24:06.314 15:34:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:06.314 15:34:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.314 15:34:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:06.314 15:34:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:06.314 15:34:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.314 15:34:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:06.314 15:34:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:06.314 15:34:22 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:24:06.314 15:34:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:06.314 15:34:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:06.314 15:34:22 -- common/autotest_common.sh@10 -- # set +x 00:24:06.314 15:34:22 -- nvmf/common.sh@470 -- # nvmfpid=1753862 00:24:06.314 15:34:22 -- nvmf/common.sh@471 -- # waitforlisten 1753862 00:24:06.314 15:34:22 -- common/autotest_common.sh@817 -- # '[' -z 1753862 ']' 00:24:06.314 15:34:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.314 15:34:22 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:24:06.314 15:34:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.314 15:34:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:06.314 15:34:22 -- common/autotest_common.sh@10 -- # set +x 00:24:06.314 15:34:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:06.314 15:34:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:06.314 15:34:23 -- common/autotest_common.sh@850 -- # return 0 00:24:06.314 15:34:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:06.314 15:34:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:06.314 15:34:23 -- common/autotest_common.sh@10 -- # set +x 00:24:06.314 15:34:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.314 15:34:23 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:06.314 15:34:23 -- host/auth.sh@81 -- # gen_key null 32 00:24:06.314 15:34:23 -- host/auth.sh@53 -- # local digest len file key 00:24:06.314 15:34:23 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:06.314 15:34:23 -- host/auth.sh@54 -- # local -A digests 00:24:06.314 15:34:23 -- host/auth.sh@56 -- # digest=null 00:24:06.314 15:34:23 -- host/auth.sh@56 -- # len=32 00:24:06.314 15:34:23 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:06.314 15:34:23 -- host/auth.sh@57 -- # key=e5a8e1b19999f292a42a6c2b8073d393 00:24:06.314 15:34:23 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:24:06.314 15:34:23 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.IzG 00:24:06.314 15:34:23 -- host/auth.sh@59 -- # format_dhchap_key 
e5a8e1b19999f292a42a6c2b8073d393 0 00:24:06.314 15:34:23 -- nvmf/common.sh@708 -- # format_key DHHC-1 e5a8e1b19999f292a42a6c2b8073d393 0 00:24:06.314 15:34:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:06.314 15:34:23 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:06.315 15:34:23 -- nvmf/common.sh@693 -- # key=e5a8e1b19999f292a42a6c2b8073d393 00:24:06.315 15:34:23 -- nvmf/common.sh@693 -- # digest=0 00:24:06.315 15:34:23 -- nvmf/common.sh@694 -- # python - 00:24:06.315 15:34:23 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.IzG 00:24:06.315 15:34:23 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.IzG 00:24:06.315 15:34:23 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.IzG 00:24:06.315 15:34:23 -- host/auth.sh@82 -- # gen_key null 48 00:24:06.315 15:34:23 -- host/auth.sh@53 -- # local digest len file key 00:24:06.315 15:34:23 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:06.315 15:34:23 -- host/auth.sh@54 -- # local -A digests 00:24:06.315 15:34:23 -- host/auth.sh@56 -- # digest=null 00:24:06.315 15:34:23 -- host/auth.sh@56 -- # len=48 00:24:06.315 15:34:23 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:06.315 15:34:23 -- host/auth.sh@57 -- # key=24a33b98a1d1aa510f9fd9b62be640477617f887462fd507 00:24:06.315 15:34:23 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:24:06.315 15:34:23 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.wRe 00:24:06.315 15:34:23 -- host/auth.sh@59 -- # format_dhchap_key 24a33b98a1d1aa510f9fd9b62be640477617f887462fd507 0 00:24:06.315 15:34:23 -- nvmf/common.sh@708 -- # format_key DHHC-1 24a33b98a1d1aa510f9fd9b62be640477617f887462fd507 0 00:24:06.315 15:34:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:06.315 15:34:23 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:06.315 15:34:23 -- nvmf/common.sh@693 -- # key=24a33b98a1d1aa510f9fd9b62be640477617f887462fd507 00:24:06.315 15:34:23 -- nvmf/common.sh@693 -- # digest=0 00:24:06.315 
15:34:23 -- nvmf/common.sh@694 -- # python - 00:24:06.315 15:34:23 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.wRe 00:24:06.315 15:34:23 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.wRe 00:24:06.315 15:34:23 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.wRe 00:24:06.315 15:34:23 -- host/auth.sh@83 -- # gen_key sha256 32 00:24:06.315 15:34:23 -- host/auth.sh@53 -- # local digest len file key 00:24:06.315 15:34:23 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:06.315 15:34:23 -- host/auth.sh@54 -- # local -A digests 00:24:06.315 15:34:23 -- host/auth.sh@56 -- # digest=sha256 00:24:06.315 15:34:23 -- host/auth.sh@56 -- # len=32 00:24:06.315 15:34:23 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:06.315 15:34:23 -- host/auth.sh@57 -- # key=907f45b915846f9e0dff2b58eebd78bb 00:24:06.315 15:34:23 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:24:06.315 15:34:23 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.jo2 00:24:06.315 15:34:23 -- host/auth.sh@59 -- # format_dhchap_key 907f45b915846f9e0dff2b58eebd78bb 1 00:24:06.315 15:34:23 -- nvmf/common.sh@708 -- # format_key DHHC-1 907f45b915846f9e0dff2b58eebd78bb 1 00:24:06.315 15:34:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:06.315 15:34:23 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:06.315 15:34:23 -- nvmf/common.sh@693 -- # key=907f45b915846f9e0dff2b58eebd78bb 00:24:06.315 15:34:23 -- nvmf/common.sh@693 -- # digest=1 00:24:06.315 15:34:23 -- nvmf/common.sh@694 -- # python - 00:24:06.315 15:34:23 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.jo2 00:24:06.315 15:34:23 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.jo2 00:24:06.315 15:34:23 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.jo2 00:24:06.315 15:34:23 -- host/auth.sh@84 -- # gen_key sha384 48 00:24:06.315 15:34:23 -- host/auth.sh@53 -- # local digest len file key 00:24:06.315 15:34:23 -- host/auth.sh@54 -- # digests=(['null']='0' 
['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:06.315 15:34:23 -- host/auth.sh@54 -- # local -A digests 00:24:06.315 15:34:23 -- host/auth.sh@56 -- # digest=sha384 00:24:06.315 15:34:23 -- host/auth.sh@56 -- # len=48 00:24:06.315 15:34:23 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:06.315 15:34:23 -- host/auth.sh@57 -- # key=f2bac5303625f9ae9d3280055078d2b6c6c13150bbc78d62 00:24:06.315 15:34:23 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:24:06.315 15:34:23 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.voO 00:24:06.315 15:34:23 -- host/auth.sh@59 -- # format_dhchap_key f2bac5303625f9ae9d3280055078d2b6c6c13150bbc78d62 2 00:24:06.315 15:34:23 -- nvmf/common.sh@708 -- # format_key DHHC-1 f2bac5303625f9ae9d3280055078d2b6c6c13150bbc78d62 2 00:24:06.315 15:34:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:06.315 15:34:23 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:06.315 15:34:23 -- nvmf/common.sh@693 -- # key=f2bac5303625f9ae9d3280055078d2b6c6c13150bbc78d62 00:24:06.315 15:34:23 -- nvmf/common.sh@693 -- # digest=2 00:24:06.315 15:34:23 -- nvmf/common.sh@694 -- # python - 00:24:06.315 15:34:23 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.voO 00:24:06.315 15:34:23 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.voO 00:24:06.315 15:34:23 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.voO 00:24:06.315 15:34:23 -- host/auth.sh@85 -- # gen_key sha512 64 00:24:06.315 15:34:23 -- host/auth.sh@53 -- # local digest len file key 00:24:06.315 15:34:23 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:06.315 15:34:23 -- host/auth.sh@54 -- # local -A digests 00:24:06.315 15:34:23 -- host/auth.sh@56 -- # digest=sha512 00:24:06.315 15:34:23 -- host/auth.sh@56 -- # len=64 00:24:06.315 15:34:23 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:06.315 15:34:23 -- host/auth.sh@57 -- # key=e35f6ddb3c1c17624bcb7bec254ad76ee8821879209a44268a1140b8ed22e5cd 
00:24:06.315 15:34:23 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:24:06.315 15:34:23 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.7Id 00:24:06.315 15:34:23 -- host/auth.sh@59 -- # format_dhchap_key e35f6ddb3c1c17624bcb7bec254ad76ee8821879209a44268a1140b8ed22e5cd 3 00:24:06.315 15:34:23 -- nvmf/common.sh@708 -- # format_key DHHC-1 e35f6ddb3c1c17624bcb7bec254ad76ee8821879209a44268a1140b8ed22e5cd 3 00:24:06.315 15:34:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:06.315 15:34:23 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:06.315 15:34:23 -- nvmf/common.sh@693 -- # key=e35f6ddb3c1c17624bcb7bec254ad76ee8821879209a44268a1140b8ed22e5cd 00:24:06.315 15:34:23 -- nvmf/common.sh@693 -- # digest=3 00:24:06.315 15:34:23 -- nvmf/common.sh@694 -- # python - 00:24:06.315 15:34:23 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.7Id 00:24:06.315 15:34:23 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.7Id 00:24:06.315 15:34:23 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.7Id 00:24:06.315 15:34:23 -- host/auth.sh@87 -- # waitforlisten 1753862 00:24:06.315 15:34:23 -- common/autotest_common.sh@817 -- # '[' -z 1753862 ']' 00:24:06.315 15:34:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.315 15:34:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:06.315 15:34:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:06.315 15:34:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:06.315 15:34:23 -- common/autotest_common.sh@10 -- # set +x 00:24:06.575 15:34:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:06.575 15:34:23 -- common/autotest_common.sh@850 -- # return 0 00:24:06.575 15:34:23 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:06.575 15:34:23 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.IzG 00:24:06.575 15:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.575 15:34:23 -- common/autotest_common.sh@10 -- # set +x 00:24:06.575 15:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.575 15:34:23 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:06.575 15:34:23 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.wRe 00:24:06.575 15:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.575 15:34:23 -- common/autotest_common.sh@10 -- # set +x 00:24:06.575 15:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.575 15:34:23 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:06.575 15:34:23 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.jo2 00:24:06.575 15:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.575 15:34:23 -- common/autotest_common.sh@10 -- # set +x 00:24:06.575 15:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.575 15:34:23 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:06.575 15:34:23 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.voO 00:24:06.575 15:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.575 15:34:23 -- common/autotest_common.sh@10 -- # set +x 00:24:06.575 15:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.575 15:34:23 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:06.575 15:34:23 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 
/tmp/spdk.key-sha512.7Id 00:24:06.575 15:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.575 15:34:23 -- common/autotest_common.sh@10 -- # set +x 00:24:06.575 15:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.575 15:34:23 -- host/auth.sh@92 -- # nvmet_auth_init 00:24:06.575 15:34:23 -- host/auth.sh@35 -- # get_main_ns_ip 00:24:06.575 15:34:23 -- nvmf/common.sh@717 -- # local ip 00:24:06.575 15:34:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:06.575 15:34:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:06.575 15:34:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.575 15:34:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.575 15:34:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:06.575 15:34:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.575 15:34:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:06.575 15:34:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:06.575 15:34:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:06.575 15:34:23 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:06.575 15:34:23 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:06.575 15:34:23 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:24:06.575 15:34:23 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:06.575 15:34:23 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:06.575 15:34:23 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:06.575 15:34:23 -- nvmf/common.sh@628 -- # local block nvme 00:24:06.575 15:34:23 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:06.575 15:34:23 -- nvmf/common.sh@631 -- # modprobe nvmet 00:24:06.575 15:34:24 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:06.575 15:34:24 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:09.876 Waiting for block devices as requested 00:24:09.876 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:24:10.136 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:24:10.136 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:24:10.136 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:24:10.397 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:24:10.397 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:24:10.397 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:24:10.397 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:24:10.658 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:24:10.658 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:24:10.925 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:24:10.925 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:24:10.925 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:24:10.925 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:24:11.189 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:24:11.189 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:24:11.189 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:24:12.129 15:34:29 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:12.129 15:34:29 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:12.129 15:34:29 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:24:12.129 15:34:29 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:12.129 15:34:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:12.129 15:34:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:12.129 15:34:29 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:24:12.129 15:34:29 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:12.129 
15:34:29 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:12.129 No valid GPT data, bailing 00:24:12.129 15:34:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:12.129 15:34:29 -- scripts/common.sh@391 -- # pt= 00:24:12.129 15:34:29 -- scripts/common.sh@392 -- # return 1 00:24:12.129 15:34:29 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:24:12.129 15:34:29 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:24:12.129 15:34:29 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:12.129 15:34:29 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:12.129 15:34:29 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:12.129 15:34:29 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:12.129 15:34:29 -- nvmf/common.sh@656 -- # echo 1 00:24:12.129 15:34:29 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:24:12.129 15:34:29 -- nvmf/common.sh@658 -- # echo 1 00:24:12.129 15:34:29 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:24:12.129 15:34:29 -- nvmf/common.sh@661 -- # echo tcp 00:24:12.129 15:34:29 -- nvmf/common.sh@662 -- # echo 4420 00:24:12.129 15:34:29 -- nvmf/common.sh@663 -- # echo ipv4 00:24:12.129 15:34:29 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:12.129 15:34:29 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:24:12.391 00:24:12.391 Discovery Log Number of Records 2, Generation counter 2 00:24:12.391 =====Discovery Log Entry 0====== 00:24:12.391 trtype: tcp 00:24:12.391 adrfam: ipv4 00:24:12.391 subtype: current discovery subsystem 00:24:12.391 treq: not specified, sq flow control 
disable supported 00:24:12.391 portid: 1 00:24:12.391 trsvcid: 4420 00:24:12.391 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:12.391 traddr: 10.0.0.1 00:24:12.391 eflags: none 00:24:12.391 sectype: none 00:24:12.391 =====Discovery Log Entry 1====== 00:24:12.391 trtype: tcp 00:24:12.391 adrfam: ipv4 00:24:12.391 subtype: nvme subsystem 00:24:12.391 treq: not specified, sq flow control disable supported 00:24:12.391 portid: 1 00:24:12.391 trsvcid: 4420 00:24:12.391 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:12.391 traddr: 10.0.0.1 00:24:12.391 eflags: none 00:24:12.391 sectype: none 00:24:12.391 15:34:29 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:12.391 15:34:29 -- host/auth.sh@37 -- # echo 0 00:24:12.391 15:34:29 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:12.391 15:34:29 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:12.391 15:34:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:12.391 15:34:29 -- host/auth.sh@44 -- # digest=sha256 00:24:12.391 15:34:29 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:12.391 15:34:29 -- host/auth.sh@44 -- # keyid=1 00:24:12.392 15:34:29 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:12.392 15:34:29 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:12.392 15:34:29 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:12.392 15:34:29 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:12.392 15:34:29 -- host/auth.sh@100 -- # IFS=, 00:24:12.392 15:34:29 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:24:12.392 15:34:29 -- host/auth.sh@100 -- # IFS=, 00:24:12.392 15:34:29 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:24:12.392 15:34:29 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:12.392 15:34:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:12.392 15:34:29 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:24:12.392 15:34:29 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:12.392 15:34:29 -- host/auth.sh@68 -- # keyid=1 00:24:12.392 15:34:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:12.392 15:34:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.392 15:34:29 -- common/autotest_common.sh@10 -- # set +x 00:24:12.392 15:34:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.392 15:34:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:12.392 15:34:29 -- nvmf/common.sh@717 -- # local ip 00:24:12.392 15:34:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:12.392 15:34:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:12.392 15:34:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.392 15:34:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.392 15:34:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:12.392 15:34:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.392 15:34:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:12.392 15:34:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:12.392 15:34:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:12.392 15:34:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:12.392 15:34:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.392 15:34:29 -- common/autotest_common.sh@10 -- # set +x 00:24:12.392 nvme0n1 00:24:12.392 
15:34:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.392 15:34:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.392 15:34:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:12.392 15:34:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.392 15:34:29 -- common/autotest_common.sh@10 -- # set +x 00:24:12.392 15:34:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.392 15:34:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.392 15:34:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.392 15:34:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.392 15:34:29 -- common/autotest_common.sh@10 -- # set +x 00:24:12.653 15:34:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.653 15:34:29 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:24:12.653 15:34:29 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:12.653 15:34:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:12.653 15:34:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:12.653 15:34:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:12.653 15:34:29 -- host/auth.sh@44 -- # digest=sha256 00:24:12.653 15:34:29 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:12.653 15:34:29 -- host/auth.sh@44 -- # keyid=0 00:24:12.653 15:34:29 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:12.653 15:34:29 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:12.653 15:34:29 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:12.653 15:34:29 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:12.653 15:34:29 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:24:12.653 15:34:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:12.653 15:34:29 -- host/auth.sh@68 -- # digest=sha256 00:24:12.653 15:34:29 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 
00:24:12.653 15:34:29 -- host/auth.sh@68 -- # keyid=0 00:24:12.654 15:34:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:12.654 15:34:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.654 15:34:29 -- common/autotest_common.sh@10 -- # set +x 00:24:12.654 15:34:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.654 15:34:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:12.654 15:34:29 -- nvmf/common.sh@717 -- # local ip 00:24:12.654 15:34:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:12.654 15:34:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:12.654 15:34:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.654 15:34:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.654 15:34:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:12.654 15:34:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.654 15:34:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:12.654 15:34:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:12.654 15:34:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:12.654 15:34:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:12.654 15:34:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.654 15:34:29 -- common/autotest_common.sh@10 -- # set +x 00:24:12.654 nvme0n1 00:24:12.654 15:34:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.654 15:34:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.654 15:34:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:12.654 15:34:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.654 15:34:30 -- common/autotest_common.sh@10 -- # set +x 00:24:12.654 15:34:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.654 15:34:30 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.654 15:34:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.654 15:34:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.654 15:34:30 -- common/autotest_common.sh@10 -- # set +x 00:24:12.654 15:34:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.654 15:34:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:12.654 15:34:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:12.654 15:34:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:12.654 15:34:30 -- host/auth.sh@44 -- # digest=sha256 00:24:12.654 15:34:30 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:12.654 15:34:30 -- host/auth.sh@44 -- # keyid=1 00:24:12.654 15:34:30 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:12.654 15:34:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:12.654 15:34:30 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:12.654 15:34:30 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:12.654 15:34:30 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:24:12.654 15:34:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:12.654 15:34:30 -- host/auth.sh@68 -- # digest=sha256 00:24:12.654 15:34:30 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:12.654 15:34:30 -- host/auth.sh@68 -- # keyid=1 00:24:12.654 15:34:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:12.654 15:34:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.654 15:34:30 -- common/autotest_common.sh@10 -- # set +x 00:24:12.654 15:34:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.654 15:34:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:12.654 15:34:30 -- nvmf/common.sh@717 -- # local ip 00:24:12.654 15:34:30 -- 
nvmf/common.sh@718 -- # ip_candidates=() 00:24:12.654 15:34:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:12.654 15:34:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.654 15:34:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.654 15:34:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:12.654 15:34:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.654 15:34:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:12.654 15:34:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:12.654 15:34:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:12.654 15:34:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:12.654 15:34:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.654 15:34:30 -- common/autotest_common.sh@10 -- # set +x 00:24:12.915 nvme0n1 00:24:12.915 15:34:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.915 15:34:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.915 15:34:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:12.915 15:34:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.915 15:34:30 -- common/autotest_common.sh@10 -- # set +x 00:24:12.915 15:34:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.915 15:34:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.915 15:34:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.915 15:34:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.915 15:34:30 -- common/autotest_common.sh@10 -- # set +x 00:24:12.915 15:34:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.915 15:34:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:12.915 15:34:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:12.915 15:34:30 -- host/auth.sh@42 
-- # local digest dhgroup keyid key 00:24:12.915 15:34:30 -- host/auth.sh@44 -- # digest=sha256 00:24:12.915 15:34:30 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:12.915 15:34:30 -- host/auth.sh@44 -- # keyid=2 00:24:12.915 15:34:30 -- host/auth.sh@45 -- # key=DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:12.915 15:34:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:12.915 15:34:30 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:12.915 15:34:30 -- host/auth.sh@49 -- # echo DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:12.915 15:34:30 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:24:12.915 15:34:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:12.915 15:34:30 -- host/auth.sh@68 -- # digest=sha256 00:24:12.915 15:34:30 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:12.915 15:34:30 -- host/auth.sh@68 -- # keyid=2 00:24:12.915 15:34:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:12.915 15:34:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.915 15:34:30 -- common/autotest_common.sh@10 -- # set +x 00:24:12.915 15:34:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.915 15:34:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:12.915 15:34:30 -- nvmf/common.sh@717 -- # local ip 00:24:12.915 15:34:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:12.915 15:34:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:12.915 15:34:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.915 15:34:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.915 15:34:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:12.915 15:34:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.916 15:34:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:12.916 15:34:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:12.916 15:34:30 -- nvmf/common.sh@731 
-- # echo 10.0.0.1 00:24:12.916 15:34:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:12.916 15:34:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.916 15:34:30 -- common/autotest_common.sh@10 -- # set +x 00:24:13.177 nvme0n1 00:24:13.177 15:34:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.177 15:34:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.177 15:34:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:13.177 15:34:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.177 15:34:30 -- common/autotest_common.sh@10 -- # set +x 00:24:13.177 15:34:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.177 15:34:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.177 15:34:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.177 15:34:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.177 15:34:30 -- common/autotest_common.sh@10 -- # set +x 00:24:13.177 15:34:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.177 15:34:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:13.177 15:34:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:13.178 15:34:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:13.178 15:34:30 -- host/auth.sh@44 -- # digest=sha256 00:24:13.178 15:34:30 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:13.178 15:34:30 -- host/auth.sh@44 -- # keyid=3 00:24:13.178 15:34:30 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 00:24:13.178 15:34:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:13.178 15:34:30 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:13.178 15:34:30 -- host/auth.sh@49 -- # echo DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 
00:24:13.178 15:34:30 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:24:13.178 15:34:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:13.178 15:34:30 -- host/auth.sh@68 -- # digest=sha256 00:24:13.178 15:34:30 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:13.178 15:34:30 -- host/auth.sh@68 -- # keyid=3 00:24:13.178 15:34:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:13.178 15:34:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.178 15:34:30 -- common/autotest_common.sh@10 -- # set +x 00:24:13.178 15:34:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.178 15:34:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:13.178 15:34:30 -- nvmf/common.sh@717 -- # local ip 00:24:13.178 15:34:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:13.178 15:34:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:13.178 15:34:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.178 15:34:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.178 15:34:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:13.178 15:34:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.178 15:34:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:13.178 15:34:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:13.178 15:34:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:13.178 15:34:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:13.178 15:34:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.178 15:34:30 -- common/autotest_common.sh@10 -- # set +x 00:24:13.443 nvme0n1 00:24:13.443 15:34:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.443 15:34:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.443 15:34:30 -- 
host/auth.sh@73 -- # jq -r '.[].name'
00:24:13.443 15:34:30 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:13.443 15:34:30 -- common/autotest_common.sh@10 -- # set +x
00:24:13.443 15:34:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:13.443 15:34:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:13.443 15:34:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:13.443 15:34:30 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:13.443 15:34:30 -- common/autotest_common.sh@10 -- # set +x
00:24:13.443 15:34:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:13.443 15:34:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:13.443 15:34:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:24:13.443 15:34:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:13.443 15:34:30 -- host/auth.sh@44 -- # digest=sha256
00:24:13.443 15:34:30 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:13.443 15:34:30 -- host/auth.sh@44 -- # keyid=4
00:24:13.443 15:34:30 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=:
00:24:13.443 15:34:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:24:13.443 15:34:30 -- host/auth.sh@48 -- # echo ffdhe2048
00:24:13.443 15:34:30 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=:
00:24:13.443 15:34:30 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4
00:24:13.443 15:34:30 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:13.443 15:34:30 -- host/auth.sh@68 -- # digest=sha256
00:24:13.443 15:34:30 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:24:13.443 15:34:30 -- host/auth.sh@68 -- # keyid=4
00:24:13.443 15:34:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:24:13.443 15:34:30 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:13.443 15:34:30 -- common/autotest_common.sh@10 -- # set +x
00:24:13.443 15:34:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:13.443 15:34:30 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:13.443 15:34:30 -- nvmf/common.sh@717 -- # local ip
00:24:13.443 15:34:30 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:13.443 15:34:30 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:13.443 15:34:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:13.443 15:34:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:13.443 15:34:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:24:13.443 15:34:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:13.443 15:34:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:24:13.443 15:34:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:24:13.443 15:34:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:24:13.443 15:34:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:13.443 15:34:30 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:13.443 15:34:30 -- common/autotest_common.sh@10 -- # set +x
00:24:13.766 nvme0n1
00:24:13.766 15:34:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:13.766 15:34:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:13.766 15:34:30 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:13.766 15:34:30 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:13.766 15:34:30 -- common/autotest_common.sh@10 -- # set +x
00:24:13.766 15:34:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:13.766 15:34:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:13.766 15:34:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:13.766 15:34:30 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:13.766 15:34:30 -- common/autotest_common.sh@10 -- # set +x
00:24:13.766 15:34:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:13.766 15:34:30 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:24:13.766 15:34:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:13.766 15:34:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:24:13.766 15:34:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:13.766 15:34:30 -- host/auth.sh@44 -- # digest=sha256
00:24:13.766 15:34:30 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:13.766 15:34:30 -- host/auth.sh@44 -- # keyid=0
00:24:13.766 15:34:30 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb:
00:24:13.766 15:34:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:24:13.766 15:34:30 -- host/auth.sh@48 -- # echo ffdhe3072
00:24:13.766 15:34:30 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb:
00:24:13.766 15:34:30 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0
00:24:13.766 15:34:30 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:13.766 15:34:30 -- host/auth.sh@68 -- # digest=sha256
00:24:13.766 15:34:30 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:24:13.766 15:34:30 -- host/auth.sh@68 -- # keyid=0
00:24:13.766 15:34:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:13.766 15:34:30 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:13.766 15:34:30 -- common/autotest_common.sh@10 -- # set +x
00:24:13.766 15:34:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:13.766 15:34:30 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:13.766 15:34:30 -- nvmf/common.sh@717 -- # local ip
00:24:13.766 15:34:30 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:13.766 15:34:30 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:13.766 15:34:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:13.766 15:34:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:13.766 15:34:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:24:13.766 15:34:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:13.766 15:34:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:24:13.766 15:34:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:24:13.767 15:34:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:24:13.767 15:34:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:24:13.767 15:34:30 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:13.767 15:34:30 -- common/autotest_common.sh@10 -- # set +x
00:24:13.767 nvme0n1
00:24:13.767 15:34:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:13.767 15:34:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:13.767 15:34:31 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:13.767 15:34:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:13.767 15:34:31 -- common/autotest_common.sh@10 -- # set +x
00:24:13.767 15:34:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:14.054 15:34:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:14.054 15:34:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:14.054 15:34:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:14.054 15:34:31 -- common/autotest_common.sh@10 -- # set +x
00:24:14.054 15:34:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:14.054 15:34:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:14.054 15:34:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:24:14.054 15:34:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:14.054 15:34:31 -- host/auth.sh@44 -- # digest=sha256
00:24:14.054 15:34:31 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:14.054 15:34:31 -- host/auth.sh@44 -- # keyid=1
00:24:14.054 15:34:31 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==:
00:24:14.054 15:34:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:24:14.054 15:34:31 -- host/auth.sh@48 -- # echo ffdhe3072
00:24:14.054 15:34:31 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==:
00:24:14.054 15:34:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1
00:24:14.054 15:34:31 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:14.054 15:34:31 -- host/auth.sh@68 -- # digest=sha256
00:24:14.054 15:34:31 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:24:14.054 15:34:31 -- host/auth.sh@68 -- # keyid=1
00:24:14.054 15:34:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:14.054 15:34:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:14.054 15:34:31 -- common/autotest_common.sh@10 -- # set +x
00:24:14.054 15:34:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:14.054 15:34:31 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:14.054 15:34:31 -- nvmf/common.sh@717 -- # local ip
00:24:14.054 15:34:31 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:14.054 15:34:31 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:14.054 15:34:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:14.054 15:34:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:14.054 15:34:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:24:14.054 15:34:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:14.054 15:34:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:24:14.054 15:34:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:24:14.055 15:34:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:24:14.055 15:34:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:24:14.055 15:34:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:14.055 15:34:31 -- common/autotest_common.sh@10 -- # set +x
00:24:14.055 nvme0n1
00:24:14.055 15:34:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:14.055 15:34:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:14.055 15:34:31 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:14.055 15:34:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:14.055 15:34:31 -- common/autotest_common.sh@10 -- # set +x
00:24:14.055 15:34:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:14.055 15:34:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:14.055 15:34:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:14.055 15:34:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:14.055 15:34:31 -- common/autotest_common.sh@10 -- # set +x
00:24:14.055 15:34:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:14.055 15:34:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:14.055 15:34:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:24:14.055 15:34:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:14.055 15:34:31 -- host/auth.sh@44 -- # digest=sha256
00:24:14.055 15:34:31 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:14.055 15:34:31 -- host/auth.sh@44 -- # keyid=2
00:24:14.055 15:34:31 -- host/auth.sh@45 -- # key=DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw:
00:24:14.055 15:34:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:24:14.055 15:34:31 -- host/auth.sh@48 -- # echo ffdhe3072
00:24:14.055 15:34:31 -- host/auth.sh@49 -- # echo DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw:
00:24:14.055 15:34:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2
00:24:14.055 15:34:31 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:14.055 15:34:31 -- host/auth.sh@68 -- # digest=sha256
00:24:14.055 15:34:31 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:24:14.055 15:34:31 -- host/auth.sh@68 -- # keyid=2
00:24:14.055 15:34:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:14.055 15:34:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:14.055 15:34:31 -- common/autotest_common.sh@10 -- # set +x
00:24:14.055 15:34:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:14.055 15:34:31 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:14.055 15:34:31 -- nvmf/common.sh@717 -- # local ip
00:24:14.317 15:34:31 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:14.317 15:34:31 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:14.317 15:34:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:14.317 15:34:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:14.317 15:34:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:24:14.317 15:34:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:14.317 15:34:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:24:14.317 15:34:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:24:14.317 15:34:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:24:14.317 15:34:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:24:14.317 15:34:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:14.317 15:34:31 -- common/autotest_common.sh@10 -- # set +x
00:24:14.317 nvme0n1
00:24:14.317 15:34:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:14.317 15:34:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:14.317 15:34:31 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:14.317 15:34:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:14.317 15:34:31 -- common/autotest_common.sh@10 -- # set +x
00:24:14.317 15:34:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:14.317 15:34:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:14.317 15:34:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:14.317 15:34:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:14.317 15:34:31 -- common/autotest_common.sh@10 -- # set +x
00:24:14.317 15:34:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:14.317 15:34:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:14.317 15:34:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:24:14.317 15:34:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:14.317 15:34:31 -- host/auth.sh@44 -- # digest=sha256
00:24:14.317 15:34:31 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:14.317 15:34:31 -- host/auth.sh@44 -- # keyid=3
00:24:14.317 15:34:31 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==:
00:24:14.317 15:34:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:24:14.317 15:34:31 -- host/auth.sh@48 -- # echo ffdhe3072
00:24:14.317 15:34:31 -- host/auth.sh@49 -- # echo DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==:
00:24:14.317 15:34:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3
00:24:14.317 15:34:31 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:14.317 15:34:31 -- host/auth.sh@68 -- # digest=sha256
00:24:14.317 15:34:31 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:24:14.317 15:34:31 -- host/auth.sh@68 -- # keyid=3
00:24:14.317 15:34:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:14.317 15:34:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:14.317 15:34:31 -- common/autotest_common.sh@10 -- # set +x
00:24:14.317 15:34:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:14.317 15:34:31 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:14.317 15:34:31 -- nvmf/common.sh@717 -- # local ip
00:24:14.317 15:34:31 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:14.317 15:34:31 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:14.317 15:34:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:14.317 15:34:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:14.317 15:34:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:24:14.317 15:34:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:14.317 15:34:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:24:14.317 15:34:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:24:14.317 15:34:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:24:14.317 15:34:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:24:14.317 15:34:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:14.317 15:34:31 -- common/autotest_common.sh@10 -- # set +x
00:24:14.579 nvme0n1
00:24:14.579 15:34:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:14.579 15:34:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:14.579 15:34:31 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:14.579 15:34:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:14.579 15:34:31 -- common/autotest_common.sh@10 -- # set +x
00:24:14.579 15:34:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:14.579 15:34:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:14.579 15:34:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:14.579 15:34:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:14.579 15:34:31 -- common/autotest_common.sh@10 -- # set +x
00:24:14.579 15:34:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:14.579 15:34:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:14.579 15:34:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:24:14.579 15:34:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:14.579 15:34:32 -- host/auth.sh@44 -- # digest=sha256
00:24:14.579 15:34:32 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:14.579 15:34:32 -- host/auth.sh@44 -- # keyid=4
00:24:14.579 15:34:32 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=:
00:24:14.579 15:34:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:24:14.579 15:34:32 -- host/auth.sh@48 -- # echo ffdhe3072
00:24:14.579 15:34:32 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=:
00:24:14.579 15:34:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4
00:24:14.579 15:34:32 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:14.579 15:34:32 -- host/auth.sh@68 -- # digest=sha256
00:24:14.579 15:34:32 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:24:14.579 15:34:32 -- host/auth.sh@68 -- # keyid=4
00:24:14.579 15:34:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:14.579 15:34:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:14.579 15:34:32 -- common/autotest_common.sh@10 -- # set +x
00:24:14.579 15:34:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:14.579 15:34:32 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:14.579 15:34:32 -- nvmf/common.sh@717 -- # local ip
00:24:14.579 15:34:32 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:14.579 15:34:32 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:14.579 15:34:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:14.579 15:34:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:14.579 15:34:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:24:14.579 15:34:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:14.579 15:34:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:24:14.579 15:34:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:24:14.579 15:34:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:24:14.579 15:34:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:14.579 15:34:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:14.579 15:34:32 -- common/autotest_common.sh@10 -- # set +x
00:24:14.841 nvme0n1
00:24:14.841 15:34:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:14.841 15:34:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:14.841 15:34:32 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:14.841 15:34:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:14.841 15:34:32 -- common/autotest_common.sh@10 -- # set +x
00:24:14.841 15:34:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:14.841 15:34:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:14.841 15:34:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:14.841 15:34:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:14.841 15:34:32 -- common/autotest_common.sh@10 -- # set +x
00:24:14.841 15:34:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:14.841 15:34:32 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:24:14.841 15:34:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:14.841 15:34:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:24:14.841 15:34:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:14.841 15:34:32 -- host/auth.sh@44 -- # digest=sha256
00:24:14.841 15:34:32 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:14.841 15:34:32 -- host/auth.sh@44 -- # keyid=0
00:24:14.841 15:34:32 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb:
00:24:14.841 15:34:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:24:14.841 15:34:32 -- host/auth.sh@48 -- # echo ffdhe4096
00:24:14.841 15:34:32 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb:
00:24:14.841 15:34:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0
00:24:14.842 15:34:32 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:14.842 15:34:32 -- host/auth.sh@68 -- # digest=sha256
00:24:14.842 15:34:32 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:24:14.842 15:34:32 -- host/auth.sh@68 -- # keyid=0
00:24:14.842 15:34:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:24:14.842 15:34:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:14.842 15:34:32 -- common/autotest_common.sh@10 -- # set +x
00:24:14.842 15:34:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:14.842 15:34:32 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:14.842 15:34:32 -- nvmf/common.sh@717 -- # local ip
00:24:14.842 15:34:32 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:14.842 15:34:32 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:14.842 15:34:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:14.842 15:34:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:14.842 15:34:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:24:14.842 15:34:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:14.842 15:34:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:24:14.842 15:34:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:24:14.842 15:34:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:24:14.842 15:34:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:24:14.842 15:34:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:14.842 15:34:32 -- common/autotest_common.sh@10 -- # set +x
00:24:15.103 nvme0n1
00:24:15.103 15:34:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:15.103 15:34:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:15.103 15:34:32 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:15.103 15:34:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:15.103 15:34:32 -- common/autotest_common.sh@10 -- # set +x
00:24:15.364 15:34:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:15.364 15:34:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:15.364 15:34:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:15.364 15:34:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:15.364 15:34:32 -- common/autotest_common.sh@10 -- # set +x
00:24:15.364 15:34:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:15.364 15:34:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:15.364 15:34:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:24:15.364 15:34:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:15.364 15:34:32 -- host/auth.sh@44 -- # digest=sha256
00:24:15.364 15:34:32 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:15.364 15:34:32 -- host/auth.sh@44 -- # keyid=1
00:24:15.364 15:34:32 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==:
00:24:15.364 15:34:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:24:15.364 15:34:32 -- host/auth.sh@48 -- # echo ffdhe4096
00:24:15.364 15:34:32 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==:
00:24:15.364 15:34:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1
00:24:15.364 15:34:32 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:15.364 15:34:32 -- host/auth.sh@68 -- # digest=sha256
00:24:15.364 15:34:32 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:24:15.364 15:34:32 -- host/auth.sh@68 -- # keyid=1
00:24:15.364 15:34:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:24:15.364 15:34:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:15.364 15:34:32 -- common/autotest_common.sh@10 -- # set +x
00:24:15.364 15:34:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:15.364 15:34:32 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:15.364 15:34:32 -- nvmf/common.sh@717 -- # local ip
00:24:15.364 15:34:32 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:15.364 15:34:32 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:15.364 15:34:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:15.364 15:34:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:15.364 15:34:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:24:15.364 15:34:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:15.364 15:34:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:24:15.365 15:34:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:24:15.365 15:34:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:24:15.365 15:34:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:24:15.365 15:34:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:15.365 15:34:32 -- common/autotest_common.sh@10 -- # set +x
00:24:15.626 nvme0n1
00:24:15.626 15:34:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:15.626 15:34:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:15.626 15:34:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:15.626 15:34:32 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:15.626 15:34:32 -- common/autotest_common.sh@10 -- # set +x
00:24:15.626 15:34:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:15.626 15:34:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:15.626 15:34:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:15.626 15:34:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:15.626 15:34:32 -- common/autotest_common.sh@10 -- # set +x
00:24:15.626 15:34:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:15.626 15:34:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:15.626 15:34:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:24:15.626 15:34:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:15.626 15:34:32 -- host/auth.sh@44 -- # digest=sha256
00:24:15.626 15:34:32 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:15.626 15:34:32 -- host/auth.sh@44 -- # keyid=2
00:24:15.626 15:34:32 -- host/auth.sh@45 -- # key=DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw:
00:24:15.627 15:34:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:24:15.627 15:34:32 -- host/auth.sh@48 -- # echo ffdhe4096
00:24:15.627 15:34:32 -- host/auth.sh@49 -- # echo DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw:
00:24:15.627 15:34:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2
00:24:15.627 15:34:32 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:15.627 15:34:32 -- host/auth.sh@68 -- # digest=sha256
00:24:15.627 15:34:32 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:24:15.627 15:34:32 -- host/auth.sh@68 -- # keyid=2
00:24:15.627 15:34:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:24:15.627 15:34:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:15.627 15:34:32 -- common/autotest_common.sh@10 -- # set +x
00:24:15.627 15:34:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:15.627 15:34:32 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:15.627 15:34:32 -- nvmf/common.sh@717 -- # local ip
00:24:15.627 15:34:32 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:15.627 15:34:32 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:15.627 15:34:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:15.627 15:34:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:15.627 15:34:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:24:15.627 15:34:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:15.627 15:34:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:24:15.627 15:34:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:24:15.627 15:34:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:24:15.627 15:34:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:24:15.627 15:34:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:15.627 15:34:32 -- common/autotest_common.sh@10 -- # set +x
00:24:15.888 nvme0n1
00:24:15.888 15:34:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:15.888 15:34:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:15.888 15:34:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:15.888 15:34:33 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:15.888 15:34:33 -- common/autotest_common.sh@10 -- # set +x
00:24:15.888 15:34:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:15.888 15:34:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:15.888 15:34:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:15.888 15:34:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:15.888 15:34:33 -- common/autotest_common.sh@10 -- # set +x
00:24:15.888 15:34:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:15.888 15:34:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:15.888 15:34:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:24:15.888 15:34:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:15.888 15:34:33 -- host/auth.sh@44 -- # digest=sha256
00:24:15.888 15:34:33 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:15.888 15:34:33 -- host/auth.sh@44 -- # keyid=3
00:24:15.888 15:34:33 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==:
00:24:15.888 15:34:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:24:15.888 15:34:33 -- host/auth.sh@48 -- # echo ffdhe4096
00:24:15.888 15:34:33 -- host/auth.sh@49 -- # echo DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==:
00:24:15.888 15:34:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3
00:24:15.888 15:34:33 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:15.888 15:34:33 -- host/auth.sh@68 -- # digest=sha256
00:24:15.888 15:34:33 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:24:15.888 15:34:33 -- host/auth.sh@68 -- # keyid=3
00:24:15.888 15:34:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:24:15.888 15:34:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:15.888 15:34:33 -- common/autotest_common.sh@10 -- # set +x
00:24:15.888 15:34:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:15.888 15:34:33 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:15.888 15:34:33 -- nvmf/common.sh@717 -- # local ip
00:24:15.888 15:34:33 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:15.888 15:34:33 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:15.888 15:34:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:15.888 15:34:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:15.888 15:34:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:24:15.888 15:34:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:15.888 15:34:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:24:15.888 15:34:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:24:15.888 15:34:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:24:15.888 15:34:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:24:15.888 15:34:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:15.888 15:34:33 -- common/autotest_common.sh@10 -- # set +x
00:24:16.461 nvme0n1
00:24:16.461 15:34:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:16.461 15:34:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:16.461 15:34:33 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:16.461 15:34:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:16.461 15:34:33 -- common/autotest_common.sh@10 -- # set +x
00:24:16.461 15:34:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:16.461 15:34:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:16.461 15:34:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:16.461 15:34:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:16.461 15:34:33 -- common/autotest_common.sh@10 -- # set +x
00:24:16.461 15:34:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:16.461 15:34:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:16.461 15:34:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:24:16.461 15:34:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:16.461 15:34:33 -- host/auth.sh@44 -- # digest=sha256
00:24:16.461 15:34:33 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:16.461 15:34:33 -- host/auth.sh@44 -- # keyid=4
00:24:16.461 15:34:33 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=:
00:24:16.461 15:34:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:24:16.461 15:34:33 -- host/auth.sh@48 -- # echo ffdhe4096
00:24:16.461 15:34:33 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=:
00:24:16.461 15:34:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4
00:24:16.461 15:34:33 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:16.461 15:34:33 -- host/auth.sh@68 -- # digest=sha256
00:24:16.461 15:34:33 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:24:16.461 15:34:33 -- host/auth.sh@68 -- # keyid=4
00:24:16.461 15:34:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:24:16.461 15:34:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:16.461 15:34:33 -- common/autotest_common.sh@10 -- # set +x
00:24:16.461 15:34:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:16.461 15:34:33 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:16.461 15:34:33 -- nvmf/common.sh@717 -- # local ip
00:24:16.461 15:34:33 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:16.461 15:34:33 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:16.461 15:34:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:16.461 15:34:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:16.461 15:34:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:24:16.461 15:34:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:16.461 15:34:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:24:16.461 15:34:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:24:16.461 15:34:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:24:16.461 15:34:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:16.461 15:34:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:16.461 15:34:33 -- common/autotest_common.sh@10 -- # set +x
00:24:16.723 nvme0n1
00:24:16.723 15:34:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:16.723 15:34:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:16.723 15:34:33 -- host/auth.sh@73 -- # jq -r
'.[].name' 00:24:16.723 15:34:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.723 15:34:33 -- common/autotest_common.sh@10 -- # set +x 00:24:16.723 15:34:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.723 15:34:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.723 15:34:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.723 15:34:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.723 15:34:34 -- common/autotest_common.sh@10 -- # set +x 00:24:16.723 15:34:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.723 15:34:34 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:16.723 15:34:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:16.723 15:34:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:16.723 15:34:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:16.723 15:34:34 -- host/auth.sh@44 -- # digest=sha256 00:24:16.723 15:34:34 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:16.723 15:34:34 -- host/auth.sh@44 -- # keyid=0 00:24:16.723 15:34:34 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:16.723 15:34:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:16.723 15:34:34 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:16.723 15:34:34 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:16.723 15:34:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:24:16.723 15:34:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:16.723 15:34:34 -- host/auth.sh@68 -- # digest=sha256 00:24:16.723 15:34:34 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:16.723 15:34:34 -- host/auth.sh@68 -- # keyid=0 00:24:16.723 15:34:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:16.723 15:34:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.723 
15:34:34 -- common/autotest_common.sh@10 -- # set +x 00:24:16.723 15:34:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.723 15:34:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:16.723 15:34:34 -- nvmf/common.sh@717 -- # local ip 00:24:16.723 15:34:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:16.723 15:34:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:16.723 15:34:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.723 15:34:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.723 15:34:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:16.723 15:34:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.723 15:34:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:16.723 15:34:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:16.723 15:34:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:16.723 15:34:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:16.723 15:34:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.723 15:34:34 -- common/autotest_common.sh@10 -- # set +x 00:24:17.296 nvme0n1 00:24:17.296 15:34:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.296 15:34:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.296 15:34:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:17.296 15:34:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.296 15:34:34 -- common/autotest_common.sh@10 -- # set +x 00:24:17.296 15:34:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.296 15:34:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.296 15:34:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.296 15:34:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.296 15:34:34 -- common/autotest_common.sh@10 -- # set +x 
00:24:17.296 15:34:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.296 15:34:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:17.296 15:34:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:17.296 15:34:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:17.296 15:34:34 -- host/auth.sh@44 -- # digest=sha256 00:24:17.296 15:34:34 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:17.296 15:34:34 -- host/auth.sh@44 -- # keyid=1 00:24:17.296 15:34:34 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:17.296 15:34:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:17.296 15:34:34 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:17.296 15:34:34 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:17.296 15:34:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:24:17.296 15:34:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:17.296 15:34:34 -- host/auth.sh@68 -- # digest=sha256 00:24:17.296 15:34:34 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:17.296 15:34:34 -- host/auth.sh@68 -- # keyid=1 00:24:17.296 15:34:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:17.296 15:34:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.296 15:34:34 -- common/autotest_common.sh@10 -- # set +x 00:24:17.296 15:34:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.296 15:34:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:17.296 15:34:34 -- nvmf/common.sh@717 -- # local ip 00:24:17.296 15:34:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:17.296 15:34:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:17.296 15:34:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.296 15:34:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:24:17.296 15:34:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:17.296 15:34:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.296 15:34:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:17.296 15:34:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:17.296 15:34:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:17.296 15:34:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:17.296 15:34:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.296 15:34:34 -- common/autotest_common.sh@10 -- # set +x 00:24:17.869 nvme0n1 00:24:17.869 15:34:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.869 15:34:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.869 15:34:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:17.869 15:34:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.869 15:34:35 -- common/autotest_common.sh@10 -- # set +x 00:24:17.869 15:34:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.869 15:34:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.869 15:34:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.869 15:34:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.869 15:34:35 -- common/autotest_common.sh@10 -- # set +x 00:24:17.869 15:34:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.869 15:34:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:17.869 15:34:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:17.869 15:34:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:17.869 15:34:35 -- host/auth.sh@44 -- # digest=sha256 00:24:17.869 15:34:35 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:17.869 15:34:35 -- host/auth.sh@44 -- # keyid=2 00:24:17.869 15:34:35 -- host/auth.sh@45 -- # 
key=DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:17.869 15:34:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:17.869 15:34:35 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:17.869 15:34:35 -- host/auth.sh@49 -- # echo DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:17.869 15:34:35 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:24:17.869 15:34:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:17.869 15:34:35 -- host/auth.sh@68 -- # digest=sha256 00:24:17.869 15:34:35 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:17.869 15:34:35 -- host/auth.sh@68 -- # keyid=2 00:24:17.869 15:34:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:17.869 15:34:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.869 15:34:35 -- common/autotest_common.sh@10 -- # set +x 00:24:17.869 15:34:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.869 15:34:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:17.869 15:34:35 -- nvmf/common.sh@717 -- # local ip 00:24:17.869 15:34:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:17.869 15:34:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:17.869 15:34:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.869 15:34:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.869 15:34:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:17.869 15:34:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.869 15:34:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:17.869 15:34:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:17.869 15:34:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:17.869 15:34:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:17.869 15:34:35 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.869 15:34:35 -- common/autotest_common.sh@10 -- # set +x 00:24:18.441 nvme0n1 00:24:18.441 15:34:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.441 15:34:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.441 15:34:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:18.441 15:34:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.441 15:34:35 -- common/autotest_common.sh@10 -- # set +x 00:24:18.441 15:34:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.441 15:34:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.441 15:34:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.441 15:34:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.441 15:34:35 -- common/autotest_common.sh@10 -- # set +x 00:24:18.441 15:34:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.441 15:34:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:18.441 15:34:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:18.441 15:34:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:18.441 15:34:35 -- host/auth.sh@44 -- # digest=sha256 00:24:18.441 15:34:35 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:18.441 15:34:35 -- host/auth.sh@44 -- # keyid=3 00:24:18.441 15:34:35 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 00:24:18.441 15:34:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:18.441 15:34:35 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:18.441 15:34:35 -- host/auth.sh@49 -- # echo DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 00:24:18.441 15:34:35 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:24:18.441 15:34:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:18.441 15:34:35 -- host/auth.sh@68 -- # digest=sha256 00:24:18.441 15:34:35 -- 
host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:18.441 15:34:35 -- host/auth.sh@68 -- # keyid=3 00:24:18.441 15:34:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:18.441 15:34:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.441 15:34:35 -- common/autotest_common.sh@10 -- # set +x 00:24:18.441 15:34:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.441 15:34:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:18.441 15:34:35 -- nvmf/common.sh@717 -- # local ip 00:24:18.441 15:34:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:18.441 15:34:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:18.441 15:34:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.441 15:34:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.441 15:34:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:18.441 15:34:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.441 15:34:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:18.441 15:34:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:18.441 15:34:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:18.441 15:34:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:18.441 15:34:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.441 15:34:35 -- common/autotest_common.sh@10 -- # set +x 00:24:18.702 nvme0n1 00:24:18.702 15:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.702 15:34:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.702 15:34:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:18.702 15:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.702 15:34:36 -- common/autotest_common.sh@10 -- # set +x 00:24:18.702 15:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:24:18.964 15:34:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.964 15:34:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.964 15:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.964 15:34:36 -- common/autotest_common.sh@10 -- # set +x 00:24:18.964 15:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.964 15:34:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:18.964 15:34:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:18.964 15:34:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:18.964 15:34:36 -- host/auth.sh@44 -- # digest=sha256 00:24:18.964 15:34:36 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:18.964 15:34:36 -- host/auth.sh@44 -- # keyid=4 00:24:18.964 15:34:36 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=: 00:24:18.964 15:34:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:18.964 15:34:36 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:18.964 15:34:36 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=: 00:24:18.964 15:34:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:24:18.964 15:34:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:18.964 15:34:36 -- host/auth.sh@68 -- # digest=sha256 00:24:18.964 15:34:36 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:18.964 15:34:36 -- host/auth.sh@68 -- # keyid=4 00:24:18.964 15:34:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:18.964 15:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.964 15:34:36 -- common/autotest_common.sh@10 -- # set +x 00:24:18.964 15:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.964 15:34:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:18.964 15:34:36 -- 
nvmf/common.sh@717 -- # local ip 00:24:18.964 15:34:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:18.964 15:34:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:18.964 15:34:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.964 15:34:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.964 15:34:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:18.964 15:34:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.964 15:34:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:18.964 15:34:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:18.964 15:34:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:18.964 15:34:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:18.964 15:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.964 15:34:36 -- common/autotest_common.sh@10 -- # set +x 00:24:19.225 nvme0n1 00:24:19.225 15:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.225 15:34:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.225 15:34:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:19.225 15:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.225 15:34:36 -- common/autotest_common.sh@10 -- # set +x 00:24:19.225 15:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.484 15:34:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.484 15:34:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.484 15:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.484 15:34:36 -- common/autotest_common.sh@10 -- # set +x 00:24:19.484 15:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.484 15:34:36 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:19.484 15:34:36 -- host/auth.sh@109 -- # for keyid in 
"${!keys[@]}" 00:24:19.484 15:34:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:19.484 15:34:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:19.484 15:34:36 -- host/auth.sh@44 -- # digest=sha256 00:24:19.484 15:34:36 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:19.484 15:34:36 -- host/auth.sh@44 -- # keyid=0 00:24:19.484 15:34:36 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:19.484 15:34:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:19.484 15:34:36 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:19.484 15:34:36 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:19.484 15:34:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:24:19.484 15:34:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:19.484 15:34:36 -- host/auth.sh@68 -- # digest=sha256 00:24:19.484 15:34:36 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:19.484 15:34:36 -- host/auth.sh@68 -- # keyid=0 00:24:19.484 15:34:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:19.484 15:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.484 15:34:36 -- common/autotest_common.sh@10 -- # set +x 00:24:19.484 15:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.484 15:34:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:19.484 15:34:36 -- nvmf/common.sh@717 -- # local ip 00:24:19.484 15:34:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:19.484 15:34:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:19.485 15:34:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.485 15:34:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.485 15:34:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:19.485 15:34:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.485 15:34:36 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:19.485 15:34:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:19.485 15:34:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:19.485 15:34:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:19.485 15:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.485 15:34:36 -- common/autotest_common.sh@10 -- # set +x 00:24:20.054 nvme0n1 00:24:20.054 15:34:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.054 15:34:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.054 15:34:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:20.054 15:34:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.054 15:34:37 -- common/autotest_common.sh@10 -- # set +x 00:24:20.054 15:34:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.314 15:34:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.314 15:34:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.314 15:34:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.314 15:34:37 -- common/autotest_common.sh@10 -- # set +x 00:24:20.314 15:34:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.314 15:34:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:20.314 15:34:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:20.314 15:34:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:20.314 15:34:37 -- host/auth.sh@44 -- # digest=sha256 00:24:20.314 15:34:37 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:20.314 15:34:37 -- host/auth.sh@44 -- # keyid=1 00:24:20.314 15:34:37 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:20.314 15:34:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:20.314 15:34:37 -- host/auth.sh@48 
-- # echo ffdhe8192 00:24:20.314 15:34:37 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:20.314 15:34:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:24:20.314 15:34:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:20.314 15:34:37 -- host/auth.sh@68 -- # digest=sha256 00:24:20.314 15:34:37 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:20.314 15:34:37 -- host/auth.sh@68 -- # keyid=1 00:24:20.314 15:34:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:20.314 15:34:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.314 15:34:37 -- common/autotest_common.sh@10 -- # set +x 00:24:20.314 15:34:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.314 15:34:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:20.314 15:34:37 -- nvmf/common.sh@717 -- # local ip 00:24:20.314 15:34:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:20.314 15:34:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:20.314 15:34:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.314 15:34:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.314 15:34:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:20.314 15:34:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.314 15:34:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:20.314 15:34:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:20.314 15:34:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:20.314 15:34:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:20.314 15:34:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.314 15:34:37 -- common/autotest_common.sh@10 -- # set +x 00:24:20.886 nvme0n1 00:24:20.886 15:34:38 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.886 15:34:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.886 15:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.886 15:34:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:20.886 15:34:38 -- common/autotest_common.sh@10 -- # set +x 00:24:20.886 15:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.147 15:34:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.147 15:34:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.147 15:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.147 15:34:38 -- common/autotest_common.sh@10 -- # set +x 00:24:21.147 15:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.147 15:34:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:21.147 15:34:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:21.147 15:34:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:21.147 15:34:38 -- host/auth.sh@44 -- # digest=sha256 00:24:21.147 15:34:38 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:21.147 15:34:38 -- host/auth.sh@44 -- # keyid=2 00:24:21.147 15:34:38 -- host/auth.sh@45 -- # key=DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:21.147 15:34:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:21.147 15:34:38 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:21.147 15:34:38 -- host/auth.sh@49 -- # echo DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:21.147 15:34:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:24:21.147 15:34:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:21.147 15:34:38 -- host/auth.sh@68 -- # digest=sha256 00:24:21.147 15:34:38 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:21.147 15:34:38 -- host/auth.sh@68 -- # keyid=2 00:24:21.147 15:34:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe8192 00:24:21.147 15:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.147 15:34:38 -- common/autotest_common.sh@10 -- # set +x 00:24:21.147 15:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.147 15:34:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:21.147 15:34:38 -- nvmf/common.sh@717 -- # local ip 00:24:21.147 15:34:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:21.147 15:34:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:21.147 15:34:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.147 15:34:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.147 15:34:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:21.147 15:34:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.147 15:34:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:21.147 15:34:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:21.147 15:34:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:21.147 15:34:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:21.147 15:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.147 15:34:38 -- common/autotest_common.sh@10 -- # set +x 00:24:21.719 nvme0n1 00:24:21.719 15:34:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.719 15:34:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.719 15:34:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:21.719 15:34:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.719 15:34:39 -- common/autotest_common.sh@10 -- # set +x 00:24:21.719 15:34:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.980 15:34:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.980 15:34:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.980 15:34:39 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.980 15:34:39 -- common/autotest_common.sh@10 -- # set +x 00:24:21.980 15:34:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.980 15:34:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:21.980 15:34:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:21.980 15:34:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:21.980 15:34:39 -- host/auth.sh@44 -- # digest=sha256 00:24:21.980 15:34:39 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:21.980 15:34:39 -- host/auth.sh@44 -- # keyid=3 00:24:21.980 15:34:39 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 00:24:21.980 15:34:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:21.980 15:34:39 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:21.980 15:34:39 -- host/auth.sh@49 -- # echo DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 00:24:21.980 15:34:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:24:21.980 15:34:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:21.980 15:34:39 -- host/auth.sh@68 -- # digest=sha256 00:24:21.980 15:34:39 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:21.980 15:34:39 -- host/auth.sh@68 -- # keyid=3 00:24:21.980 15:34:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:21.980 15:34:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.980 15:34:39 -- common/autotest_common.sh@10 -- # set +x 00:24:21.980 15:34:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.980 15:34:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:21.980 15:34:39 -- nvmf/common.sh@717 -- # local ip 00:24:21.980 15:34:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:21.980 15:34:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:21.980 15:34:39 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.980 15:34:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.980 15:34:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:21.980 15:34:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.980 15:34:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:21.980 15:34:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:21.980 15:34:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:21.980 15:34:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:21.980 15:34:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.980 15:34:39 -- common/autotest_common.sh@10 -- # set +x 00:24:22.551 nvme0n1 00:24:22.551 15:34:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.551 15:34:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.552 15:34:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.552 15:34:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:22.552 15:34:39 -- common/autotest_common.sh@10 -- # set +x 00:24:22.552 15:34:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.813 15:34:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.813 15:34:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.813 15:34:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.813 15:34:40 -- common/autotest_common.sh@10 -- # set +x 00:24:22.813 15:34:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.813 15:34:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:22.813 15:34:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:22.813 15:34:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:22.813 15:34:40 -- host/auth.sh@44 -- # digest=sha256 00:24:22.813 15:34:40 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 
00:24:22.813 15:34:40 -- host/auth.sh@44 -- # keyid=4 00:24:22.813 15:34:40 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=: 00:24:22.813 15:34:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:22.813 15:34:40 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:22.813 15:34:40 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=: 00:24:22.813 15:34:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:24:22.813 15:34:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:22.813 15:34:40 -- host/auth.sh@68 -- # digest=sha256 00:24:22.813 15:34:40 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:22.813 15:34:40 -- host/auth.sh@68 -- # keyid=4 00:24:22.813 15:34:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:22.813 15:34:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.813 15:34:40 -- common/autotest_common.sh@10 -- # set +x 00:24:22.813 15:34:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.813 15:34:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:22.813 15:34:40 -- nvmf/common.sh@717 -- # local ip 00:24:22.813 15:34:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:22.813 15:34:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:22.813 15:34:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.813 15:34:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.813 15:34:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:22.813 15:34:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.813 15:34:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:22.813 15:34:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:22.813 15:34:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:22.813 15:34:40 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:22.813 15:34:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.813 15:34:40 -- common/autotest_common.sh@10 -- # set +x 00:24:23.383 nvme0n1 00:24:23.383 15:34:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.383 15:34:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.383 15:34:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:23.383 15:34:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.383 15:34:40 -- common/autotest_common.sh@10 -- # set +x 00:24:23.383 15:34:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.644 15:34:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.644 15:34:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.644 15:34:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.644 15:34:40 -- common/autotest_common.sh@10 -- # set +x 00:24:23.644 15:34:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.644 15:34:40 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:24:23.644 15:34:40 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:23.644 15:34:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:23.644 15:34:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:23.644 15:34:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:23.644 15:34:40 -- host/auth.sh@44 -- # digest=sha384 00:24:23.644 15:34:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:23.644 15:34:40 -- host/auth.sh@44 -- # keyid=0 00:24:23.644 15:34:40 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:23.644 15:34:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:23.644 15:34:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:23.644 15:34:40 -- host/auth.sh@49 -- # echo 
DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:23.644 15:34:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:24:23.644 15:34:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:23.644 15:34:40 -- host/auth.sh@68 -- # digest=sha384 00:24:23.644 15:34:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:23.644 15:34:40 -- host/auth.sh@68 -- # keyid=0 00:24:23.644 15:34:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:23.644 15:34:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.644 15:34:40 -- common/autotest_common.sh@10 -- # set +x 00:24:23.644 15:34:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.644 15:34:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:23.644 15:34:40 -- nvmf/common.sh@717 -- # local ip 00:24:23.644 15:34:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:23.644 15:34:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:23.644 15:34:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.644 15:34:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.644 15:34:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:23.644 15:34:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.644 15:34:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:23.644 15:34:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:23.644 15:34:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:23.645 15:34:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:23.645 15:34:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.645 15:34:40 -- common/autotest_common.sh@10 -- # set +x 00:24:23.645 nvme0n1 00:24:23.645 15:34:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.645 15:34:41 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:24:23.645 15:34:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.645 15:34:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:23.645 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:24:23.645 15:34:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.645 15:34:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.645 15:34:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.645 15:34:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.645 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:24:23.645 15:34:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.645 15:34:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:23.910 15:34:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:23.910 15:34:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:23.910 15:34:41 -- host/auth.sh@44 -- # digest=sha384 00:24:23.910 15:34:41 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:23.910 15:34:41 -- host/auth.sh@44 -- # keyid=1 00:24:23.910 15:34:41 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:23.910 15:34:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:23.910 15:34:41 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:23.910 15:34:41 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:23.910 15:34:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:24:23.910 15:34:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:23.910 15:34:41 -- host/auth.sh@68 -- # digest=sha384 00:24:23.910 15:34:41 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:23.910 15:34:41 -- host/auth.sh@68 -- # keyid=1 00:24:23.910 15:34:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:23.910 15:34:41 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.910 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:24:23.910 15:34:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.910 15:34:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:23.910 15:34:41 -- nvmf/common.sh@717 -- # local ip 00:24:23.910 15:34:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:23.910 15:34:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:23.910 15:34:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.910 15:34:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.910 15:34:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:23.910 15:34:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.910 15:34:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:23.910 15:34:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:23.910 15:34:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:23.910 15:34:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:23.910 15:34:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.910 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:24:23.910 nvme0n1 00:24:23.910 15:34:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.910 15:34:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.910 15:34:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.910 15:34:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:23.910 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:24:23.910 15:34:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.910 15:34:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.910 15:34:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.910 15:34:41 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:24:23.910 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:24:23.910 15:34:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.910 15:34:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:23.910 15:34:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:23.910 15:34:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:23.910 15:34:41 -- host/auth.sh@44 -- # digest=sha384 00:24:23.910 15:34:41 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:23.910 15:34:41 -- host/auth.sh@44 -- # keyid=2 00:24:23.910 15:34:41 -- host/auth.sh@45 -- # key=DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:23.910 15:34:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:23.910 15:34:41 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:23.910 15:34:41 -- host/auth.sh@49 -- # echo DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:23.910 15:34:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:24:23.910 15:34:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:23.910 15:34:41 -- host/auth.sh@68 -- # digest=sha384 00:24:23.910 15:34:41 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:23.910 15:34:41 -- host/auth.sh@68 -- # keyid=2 00:24:23.910 15:34:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:23.910 15:34:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.910 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:24:23.910 15:34:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.910 15:34:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:23.910 15:34:41 -- nvmf/common.sh@717 -- # local ip 00:24:23.910 15:34:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:23.910 15:34:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:23.910 15:34:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.910 15:34:41 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.910 15:34:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:23.910 15:34:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.910 15:34:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:23.910 15:34:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:23.910 15:34:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:23.910 15:34:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:23.910 15:34:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.910 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:24:24.181 nvme0n1 00:24:24.181 15:34:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.181 15:34:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.181 15:34:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:24.181 15:34:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.181 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:24:24.181 15:34:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.181 15:34:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.181 15:34:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.181 15:34:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.181 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:24:24.181 15:34:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.181 15:34:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:24.181 15:34:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:24.181 15:34:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:24.181 15:34:41 -- host/auth.sh@44 -- # digest=sha384 00:24:24.181 15:34:41 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:24.181 15:34:41 -- host/auth.sh@44 -- # keyid=3 00:24:24.181 15:34:41 -- 
host/auth.sh@45 -- # key=DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 00:24:24.181 15:34:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:24.181 15:34:41 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:24.181 15:34:41 -- host/auth.sh@49 -- # echo DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 00:24:24.181 15:34:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:24:24.181 15:34:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:24.181 15:34:41 -- host/auth.sh@68 -- # digest=sha384 00:24:24.181 15:34:41 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:24.181 15:34:41 -- host/auth.sh@68 -- # keyid=3 00:24:24.181 15:34:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:24.181 15:34:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.181 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:24:24.181 15:34:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.181 15:34:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:24.181 15:34:41 -- nvmf/common.sh@717 -- # local ip 00:24:24.181 15:34:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:24.181 15:34:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:24.181 15:34:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.181 15:34:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.181 15:34:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:24.181 15:34:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.181 15:34:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:24.181 15:34:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:24.181 15:34:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:24.181 15:34:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:24.181 15:34:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.181 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:24:24.441 nvme0n1 00:24:24.441 15:34:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.441 15:34:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.441 15:34:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:24.441 15:34:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.441 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:24:24.441 15:34:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.441 15:34:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.441 15:34:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.441 15:34:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.441 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:24:24.441 15:34:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.441 15:34:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:24.441 15:34:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:24.441 15:34:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:24.441 15:34:41 -- host/auth.sh@44 -- # digest=sha384 00:24:24.441 15:34:41 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:24.441 15:34:41 -- host/auth.sh@44 -- # keyid=4 00:24:24.441 15:34:41 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=: 00:24:24.441 15:34:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:24.441 15:34:41 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:24.441 15:34:41 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=: 00:24:24.441 15:34:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:24:24.441 15:34:41 -- host/auth.sh@66 -- # local 
digest dhgroup keyid 00:24:24.441 15:34:41 -- host/auth.sh@68 -- # digest=sha384 00:24:24.441 15:34:41 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:24.441 15:34:41 -- host/auth.sh@68 -- # keyid=4 00:24:24.441 15:34:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:24.441 15:34:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.441 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:24:24.441 15:34:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.441 15:34:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:24.441 15:34:41 -- nvmf/common.sh@717 -- # local ip 00:24:24.441 15:34:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:24.441 15:34:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:24.441 15:34:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.441 15:34:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.441 15:34:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:24.441 15:34:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.441 15:34:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:24.441 15:34:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:24.441 15:34:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:24.441 15:34:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:24.441 15:34:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.441 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:24:24.703 nvme0n1 00:24:24.703 15:34:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.703 15:34:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.703 15:34:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:24.703 15:34:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.703 15:34:41 -- 
common/autotest_common.sh@10 -- # set +x 00:24:24.703 15:34:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.703 15:34:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.703 15:34:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.703 15:34:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.703 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:24:24.703 15:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.703 15:34:42 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:24.703 15:34:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:24.703 15:34:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:24.703 15:34:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:24.703 15:34:42 -- host/auth.sh@44 -- # digest=sha384 00:24:24.703 15:34:42 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:24.703 15:34:42 -- host/auth.sh@44 -- # keyid=0 00:24:24.703 15:34:42 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:24.703 15:34:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:24.703 15:34:42 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:24.703 15:34:42 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:24.703 15:34:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:24:24.703 15:34:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:24.703 15:34:42 -- host/auth.sh@68 -- # digest=sha384 00:24:24.703 15:34:42 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:24.703 15:34:42 -- host/auth.sh@68 -- # keyid=0 00:24:24.703 15:34:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:24.703 15:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.703 15:34:42 -- common/autotest_common.sh@10 -- # set +x 00:24:24.703 15:34:42 -- common/autotest_common.sh@577 -- # [[ 
0 == 0 ]] 00:24:24.703 15:34:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:24.703 15:34:42 -- nvmf/common.sh@717 -- # local ip 00:24:24.703 15:34:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:24.703 15:34:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:24.703 15:34:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.703 15:34:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.703 15:34:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:24.703 15:34:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.703 15:34:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:24.703 15:34:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:24.703 15:34:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:24.703 15:34:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:24.703 15:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.703 15:34:42 -- common/autotest_common.sh@10 -- # set +x 00:24:24.964 nvme0n1 00:24:24.964 15:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.964 15:34:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.964 15:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.964 15:34:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:24.964 15:34:42 -- common/autotest_common.sh@10 -- # set +x 00:24:24.964 15:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.964 15:34:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.964 15:34:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.964 15:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.964 15:34:42 -- common/autotest_common.sh@10 -- # set +x 00:24:24.964 15:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.964 15:34:42 -- host/auth.sh@109 -- 
# for keyid in "${!keys[@]}" 00:24:24.964 15:34:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:24.964 15:34:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:24.964 15:34:42 -- host/auth.sh@44 -- # digest=sha384 00:24:24.964 15:34:42 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:24.964 15:34:42 -- host/auth.sh@44 -- # keyid=1 00:24:24.964 15:34:42 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:24.964 15:34:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:24.964 15:34:42 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:24.964 15:34:42 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:24.964 15:34:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:24:24.964 15:34:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:24.964 15:34:42 -- host/auth.sh@68 -- # digest=sha384 00:24:24.964 15:34:42 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:24.964 15:34:42 -- host/auth.sh@68 -- # keyid=1 00:24:24.964 15:34:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:24.964 15:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.964 15:34:42 -- common/autotest_common.sh@10 -- # set +x 00:24:24.964 15:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.964 15:34:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:24.964 15:34:42 -- nvmf/common.sh@717 -- # local ip 00:24:24.964 15:34:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:24.964 15:34:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:24.964 15:34:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.964 15:34:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.964 15:34:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:24.964 15:34:42 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:24:24.964 15:34:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:24.964 15:34:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:24.964 15:34:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:24.964 15:34:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:24.964 15:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.964 15:34:42 -- common/autotest_common.sh@10 -- # set +x 00:24:25.225 nvme0n1 00:24:25.225 15:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.225 15:34:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.225 15:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.225 15:34:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:25.225 15:34:42 -- common/autotest_common.sh@10 -- # set +x 00:24:25.225 15:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.225 15:34:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.225 15:34:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.225 15:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.225 15:34:42 -- common/autotest_common.sh@10 -- # set +x 00:24:25.225 15:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.225 15:34:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:25.225 15:34:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:25.225 15:34:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:25.225 15:34:42 -- host/auth.sh@44 -- # digest=sha384 00:24:25.225 15:34:42 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:25.225 15:34:42 -- host/auth.sh@44 -- # keyid=2 00:24:25.225 15:34:42 -- host/auth.sh@45 -- # key=DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:25.225 15:34:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:25.225 
15:34:42 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:25.225 15:34:42 -- host/auth.sh@49 -- # echo DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:25.225 15:34:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:24:25.225 15:34:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:25.225 15:34:42 -- host/auth.sh@68 -- # digest=sha384 00:24:25.225 15:34:42 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:25.225 15:34:42 -- host/auth.sh@68 -- # keyid=2 00:24:25.225 15:34:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:25.225 15:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.225 15:34:42 -- common/autotest_common.sh@10 -- # set +x 00:24:25.225 15:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.225 15:34:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:25.225 15:34:42 -- nvmf/common.sh@717 -- # local ip 00:24:25.225 15:34:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:25.225 15:34:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:25.225 15:34:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.225 15:34:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.225 15:34:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:25.225 15:34:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.225 15:34:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:25.225 15:34:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:25.225 15:34:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:25.225 15:34:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:25.225 15:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.225 15:34:42 -- common/autotest_common.sh@10 -- # set +x 00:24:25.486 nvme0n1 00:24:25.486 
15:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.486 15:34:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.486 15:34:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:25.486 15:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.486 15:34:42 -- common/autotest_common.sh@10 -- # set +x 00:24:25.486 15:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.486 15:34:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.486 15:34:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.486 15:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.486 15:34:42 -- common/autotest_common.sh@10 -- # set +x 00:24:25.486 15:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.486 15:34:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:25.486 15:34:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:25.486 15:34:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:25.486 15:34:42 -- host/auth.sh@44 -- # digest=sha384 00:24:25.486 15:34:42 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:25.486 15:34:42 -- host/auth.sh@44 -- # keyid=3 00:24:25.486 15:34:42 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 00:24:25.486 15:34:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:25.486 15:34:42 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:25.486 15:34:42 -- host/auth.sh@49 -- # echo DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 00:24:25.487 15:34:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:24:25.487 15:34:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:25.487 15:34:42 -- host/auth.sh@68 -- # digest=sha384 00:24:25.487 15:34:42 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:25.487 15:34:42 -- host/auth.sh@68 -- # keyid=3 00:24:25.487 15:34:42 -- host/auth.sh@69 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:25.487 15:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.487 15:34:42 -- common/autotest_common.sh@10 -- # set +x 00:24:25.487 15:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.487 15:34:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:25.487 15:34:42 -- nvmf/common.sh@717 -- # local ip 00:24:25.487 15:34:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:25.487 15:34:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:25.487 15:34:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.487 15:34:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.487 15:34:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:25.487 15:34:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.487 15:34:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:25.487 15:34:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:25.487 15:34:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:25.487 15:34:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:25.487 15:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.487 15:34:42 -- common/autotest_common.sh@10 -- # set +x 00:24:25.748 nvme0n1 00:24:25.748 15:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.748 15:34:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.748 15:34:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:25.748 15:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.748 15:34:43 -- common/autotest_common.sh@10 -- # set +x 00:24:25.748 15:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.748 15:34:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.748 15:34:43 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:25.748 15:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.748 15:34:43 -- common/autotest_common.sh@10 -- # set +x 00:24:25.748 15:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.748 15:34:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:25.748 15:34:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:25.748 15:34:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:25.748 15:34:43 -- host/auth.sh@44 -- # digest=sha384 00:24:25.748 15:34:43 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:25.748 15:34:43 -- host/auth.sh@44 -- # keyid=4 00:24:25.748 15:34:43 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=: 00:24:25.748 15:34:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:25.748 15:34:43 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:25.748 15:34:43 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=: 00:24:25.748 15:34:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:24:25.748 15:34:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:25.748 15:34:43 -- host/auth.sh@68 -- # digest=sha384 00:24:25.748 15:34:43 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:25.748 15:34:43 -- host/auth.sh@68 -- # keyid=4 00:24:25.748 15:34:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:25.748 15:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.748 15:34:43 -- common/autotest_common.sh@10 -- # set +x 00:24:25.748 15:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.748 15:34:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:25.748 15:34:43 -- nvmf/common.sh@717 -- # local ip 00:24:25.748 15:34:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:25.748 15:34:43 -- 
nvmf/common.sh@718 -- # local -A ip_candidates 00:24:25.748 15:34:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.748 15:34:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.748 15:34:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:25.748 15:34:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.748 15:34:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:25.748 15:34:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:25.748 15:34:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:25.748 15:34:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:25.748 15:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.748 15:34:43 -- common/autotest_common.sh@10 -- # set +x 00:24:26.008 nvme0n1 00:24:26.008 15:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.008 15:34:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.008 15:34:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:26.008 15:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.008 15:34:43 -- common/autotest_common.sh@10 -- # set +x 00:24:26.008 15:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.008 15:34:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.008 15:34:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.008 15:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.008 15:34:43 -- common/autotest_common.sh@10 -- # set +x 00:24:26.008 15:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.008 15:34:43 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:26.008 15:34:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:26.008 15:34:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:26.008 15:34:43 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:26.008 15:34:43 -- host/auth.sh@44 -- # digest=sha384 00:24:26.008 15:34:43 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:26.008 15:34:43 -- host/auth.sh@44 -- # keyid=0 00:24:26.008 15:34:43 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:26.008 15:34:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:26.008 15:34:43 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:26.008 15:34:43 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:26.008 15:34:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:24:26.008 15:34:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:26.008 15:34:43 -- host/auth.sh@68 -- # digest=sha384 00:24:26.008 15:34:43 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:26.008 15:34:43 -- host/auth.sh@68 -- # keyid=0 00:24:26.008 15:34:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:26.008 15:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.008 15:34:43 -- common/autotest_common.sh@10 -- # set +x 00:24:26.008 15:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.008 15:34:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:26.008 15:34:43 -- nvmf/common.sh@717 -- # local ip 00:24:26.008 15:34:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:26.008 15:34:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:26.008 15:34:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.008 15:34:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.008 15:34:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:26.008 15:34:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.008 15:34:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:26.008 15:34:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:26.008 15:34:43 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:26.008 15:34:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:26.008 15:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.008 15:34:43 -- common/autotest_common.sh@10 -- # set +x 00:24:26.268 nvme0n1 00:24:26.268 15:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.268 15:34:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.268 15:34:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:26.268 15:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.268 15:34:43 -- common/autotest_common.sh@10 -- # set +x 00:24:26.268 15:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.268 15:34:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.268 15:34:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.268 15:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.268 15:34:43 -- common/autotest_common.sh@10 -- # set +x 00:24:26.268 15:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.268 15:34:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:26.268 15:34:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:26.268 15:34:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:26.268 15:34:43 -- host/auth.sh@44 -- # digest=sha384 00:24:26.268 15:34:43 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:26.268 15:34:43 -- host/auth.sh@44 -- # keyid=1 00:24:26.268 15:34:43 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:26.268 15:34:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:26.268 15:34:43 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:26.268 15:34:43 -- host/auth.sh@49 -- # echo 
DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:26.268 15:34:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:24:26.268 15:34:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:26.268 15:34:43 -- host/auth.sh@68 -- # digest=sha384 00:24:26.268 15:34:43 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:26.268 15:34:43 -- host/auth.sh@68 -- # keyid=1 00:24:26.268 15:34:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:26.268 15:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.268 15:34:43 -- common/autotest_common.sh@10 -- # set +x 00:24:26.268 15:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.268 15:34:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:26.268 15:34:43 -- nvmf/common.sh@717 -- # local ip 00:24:26.268 15:34:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:26.268 15:34:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:26.268 15:34:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.268 15:34:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.268 15:34:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:26.268 15:34:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.268 15:34:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:26.268 15:34:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:26.268 15:34:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:26.529 15:34:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:26.529 15:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.529 15:34:43 -- common/autotest_common.sh@10 -- # set +x 00:24:26.529 nvme0n1 00:24:26.529 15:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.529 15:34:43 
-- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.529 15:34:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:26.529 15:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.529 15:34:43 -- common/autotest_common.sh@10 -- # set +x 00:24:26.529 15:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.789 15:34:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.789 15:34:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.789 15:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.789 15:34:43 -- common/autotest_common.sh@10 -- # set +x 00:24:26.789 15:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.789 15:34:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:26.789 15:34:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:26.789 15:34:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:26.789 15:34:43 -- host/auth.sh@44 -- # digest=sha384 00:24:26.789 15:34:44 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:26.789 15:34:44 -- host/auth.sh@44 -- # keyid=2 00:24:26.789 15:34:44 -- host/auth.sh@45 -- # key=DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:26.789 15:34:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:26.789 15:34:44 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:26.789 15:34:44 -- host/auth.sh@49 -- # echo DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:26.789 15:34:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:24:26.789 15:34:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:26.789 15:34:44 -- host/auth.sh@68 -- # digest=sha384 00:24:26.789 15:34:44 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:26.789 15:34:44 -- host/auth.sh@68 -- # keyid=2 00:24:26.789 15:34:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:26.789 15:34:44 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:24:26.789 15:34:44 -- common/autotest_common.sh@10 -- # set +x 00:24:26.789 15:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.789 15:34:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:26.789 15:34:44 -- nvmf/common.sh@717 -- # local ip 00:24:26.789 15:34:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:26.789 15:34:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:26.789 15:34:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.789 15:34:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.789 15:34:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:26.789 15:34:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.789 15:34:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:26.789 15:34:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:26.789 15:34:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:26.789 15:34:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:26.789 15:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.789 15:34:44 -- common/autotest_common.sh@10 -- # set +x 00:24:27.050 nvme0n1 00:24:27.050 15:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.050 15:34:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.050 15:34:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:27.050 15:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.050 15:34:44 -- common/autotest_common.sh@10 -- # set +x 00:24:27.050 15:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.050 15:34:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.050 15:34:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.050 15:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.050 15:34:44 -- 
common/autotest_common.sh@10 -- # set +x 00:24:27.050 15:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.050 15:34:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:27.050 15:34:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:27.050 15:34:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:27.050 15:34:44 -- host/auth.sh@44 -- # digest=sha384 00:24:27.050 15:34:44 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:27.050 15:34:44 -- host/auth.sh@44 -- # keyid=3 00:24:27.050 15:34:44 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 00:24:27.050 15:34:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:27.050 15:34:44 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:27.050 15:34:44 -- host/auth.sh@49 -- # echo DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 00:24:27.050 15:34:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:24:27.050 15:34:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:27.050 15:34:44 -- host/auth.sh@68 -- # digest=sha384 00:24:27.050 15:34:44 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:27.050 15:34:44 -- host/auth.sh@68 -- # keyid=3 00:24:27.050 15:34:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:27.050 15:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.050 15:34:44 -- common/autotest_common.sh@10 -- # set +x 00:24:27.050 15:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.050 15:34:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:27.050 15:34:44 -- nvmf/common.sh@717 -- # local ip 00:24:27.050 15:34:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:27.050 15:34:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:27.050 15:34:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.050 15:34:44 -- nvmf/common.sh@721 -- 
# ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.050 15:34:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:27.050 15:34:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.050 15:34:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:27.050 15:34:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:27.050 15:34:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:27.050 15:34:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:27.050 15:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.050 15:34:44 -- common/autotest_common.sh@10 -- # set +x 00:24:27.310 nvme0n1 00:24:27.310 15:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.310 15:34:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.310 15:34:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:27.310 15:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.310 15:34:44 -- common/autotest_common.sh@10 -- # set +x 00:24:27.310 15:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.310 15:34:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.310 15:34:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.310 15:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.310 15:34:44 -- common/autotest_common.sh@10 -- # set +x 00:24:27.310 15:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.310 15:34:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:27.310 15:34:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:27.310 15:34:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:27.310 15:34:44 -- host/auth.sh@44 -- # digest=sha384 00:24:27.310 15:34:44 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:27.310 15:34:44 -- host/auth.sh@44 -- # keyid=4 00:24:27.310 15:34:44 -- 
host/auth.sh@45 -- # key=DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=: 00:24:27.310 15:34:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:27.310 15:34:44 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:27.310 15:34:44 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=: 00:24:27.310 15:34:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:24:27.310 15:34:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:27.310 15:34:44 -- host/auth.sh@68 -- # digest=sha384 00:24:27.310 15:34:44 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:27.310 15:34:44 -- host/auth.sh@68 -- # keyid=4 00:24:27.310 15:34:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:27.310 15:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.310 15:34:44 -- common/autotest_common.sh@10 -- # set +x 00:24:27.310 15:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.310 15:34:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:27.310 15:34:44 -- nvmf/common.sh@717 -- # local ip 00:24:27.310 15:34:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:27.310 15:34:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:27.310 15:34:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.310 15:34:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.310 15:34:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:27.310 15:34:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.310 15:34:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:27.310 15:34:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:27.310 15:34:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:27.310 15:34:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:27.310 15:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.310 15:34:44 -- common/autotest_common.sh@10 -- # set +x 00:24:27.570 nvme0n1 00:24:27.570 15:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.570 15:34:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.570 15:34:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:27.570 15:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.570 15:34:44 -- common/autotest_common.sh@10 -- # set +x 00:24:27.570 15:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.830 15:34:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.830 15:34:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.830 15:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.830 15:34:45 -- common/autotest_common.sh@10 -- # set +x 00:24:27.830 15:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.830 15:34:45 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:27.830 15:34:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:27.830 15:34:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:27.830 15:34:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:27.830 15:34:45 -- host/auth.sh@44 -- # digest=sha384 00:24:27.830 15:34:45 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:27.830 15:34:45 -- host/auth.sh@44 -- # keyid=0 00:24:27.830 15:34:45 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:27.830 15:34:45 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:27.830 15:34:45 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:27.830 15:34:45 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:27.830 15:34:45 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:24:27.830 15:34:45 -- 
host/auth.sh@66 -- # local digest dhgroup keyid 00:24:27.830 15:34:45 -- host/auth.sh@68 -- # digest=sha384 00:24:27.830 15:34:45 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:27.830 15:34:45 -- host/auth.sh@68 -- # keyid=0 00:24:27.830 15:34:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:27.830 15:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.830 15:34:45 -- common/autotest_common.sh@10 -- # set +x 00:24:27.830 15:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.830 15:34:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:27.830 15:34:45 -- nvmf/common.sh@717 -- # local ip 00:24:27.830 15:34:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:27.830 15:34:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:27.830 15:34:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.830 15:34:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.830 15:34:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:27.830 15:34:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.830 15:34:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:27.830 15:34:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:27.830 15:34:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:27.830 15:34:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:27.830 15:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.830 15:34:45 -- common/autotest_common.sh@10 -- # set +x 00:24:28.090 nvme0n1 00:24:28.090 15:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.090 15:34:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.090 15:34:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:28.090 15:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:24:28.090 15:34:45 -- common/autotest_common.sh@10 -- # set +x 00:24:28.350 15:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.350 15:34:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.350 15:34:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.350 15:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.350 15:34:45 -- common/autotest_common.sh@10 -- # set +x 00:24:28.350 15:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.350 15:34:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:28.350 15:34:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:28.350 15:34:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:28.350 15:34:45 -- host/auth.sh@44 -- # digest=sha384 00:24:28.350 15:34:45 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:28.350 15:34:45 -- host/auth.sh@44 -- # keyid=1 00:24:28.350 15:34:45 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:28.350 15:34:45 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:28.350 15:34:45 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:28.350 15:34:45 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:28.350 15:34:45 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:24:28.350 15:34:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:28.350 15:34:45 -- host/auth.sh@68 -- # digest=sha384 00:24:28.350 15:34:45 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:28.350 15:34:45 -- host/auth.sh@68 -- # keyid=1 00:24:28.350 15:34:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:28.350 15:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.350 15:34:45 -- common/autotest_common.sh@10 -- # set +x 00:24:28.350 15:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:24:28.350 15:34:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:28.350 15:34:45 -- nvmf/common.sh@717 -- # local ip 00:24:28.350 15:34:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:28.350 15:34:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:28.350 15:34:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.350 15:34:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.350 15:34:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:28.350 15:34:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.350 15:34:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:28.350 15:34:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:28.350 15:34:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:28.350 15:34:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:28.350 15:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.350 15:34:45 -- common/autotest_common.sh@10 -- # set +x 00:24:28.920 nvme0n1 00:24:28.920 15:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.920 15:34:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.920 15:34:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:28.920 15:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.920 15:34:46 -- common/autotest_common.sh@10 -- # set +x 00:24:28.920 15:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.920 15:34:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.920 15:34:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.920 15:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.920 15:34:46 -- common/autotest_common.sh@10 -- # set +x 00:24:28.920 15:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.920 15:34:46 -- host/auth.sh@109 -- # for 
keyid in "${!keys[@]}" 00:24:28.920 15:34:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:28.920 15:34:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:28.920 15:34:46 -- host/auth.sh@44 -- # digest=sha384 00:24:28.920 15:34:46 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:28.920 15:34:46 -- host/auth.sh@44 -- # keyid=2 00:24:28.920 15:34:46 -- host/auth.sh@45 -- # key=DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:28.920 15:34:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:28.920 15:34:46 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:28.920 15:34:46 -- host/auth.sh@49 -- # echo DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:28.920 15:34:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:24:28.920 15:34:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:28.920 15:34:46 -- host/auth.sh@68 -- # digest=sha384 00:24:28.920 15:34:46 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:28.920 15:34:46 -- host/auth.sh@68 -- # keyid=2 00:24:28.920 15:34:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:28.920 15:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.920 15:34:46 -- common/autotest_common.sh@10 -- # set +x 00:24:28.920 15:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.920 15:34:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:28.920 15:34:46 -- nvmf/common.sh@717 -- # local ip 00:24:28.920 15:34:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:28.920 15:34:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:28.920 15:34:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.920 15:34:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.920 15:34:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:28.920 15:34:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.920 15:34:46 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:28.920 15:34:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:28.920 15:34:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:28.920 15:34:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:28.920 15:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.920 15:34:46 -- common/autotest_common.sh@10 -- # set +x 00:24:29.180 nvme0n1 00:24:29.180 15:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.180 15:34:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.180 15:34:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:29.180 15:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.180 15:34:46 -- common/autotest_common.sh@10 -- # set +x 00:24:29.440 15:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.440 15:34:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.440 15:34:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.440 15:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.440 15:34:46 -- common/autotest_common.sh@10 -- # set +x 00:24:29.440 15:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.440 15:34:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:29.440 15:34:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:29.440 15:34:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:29.440 15:34:46 -- host/auth.sh@44 -- # digest=sha384 00:24:29.440 15:34:46 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:29.440 15:34:46 -- host/auth.sh@44 -- # keyid=3 00:24:29.440 15:34:46 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 00:24:29.440 15:34:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:29.440 15:34:46 -- host/auth.sh@48 
-- # echo ffdhe6144 00:24:29.440 15:34:46 -- host/auth.sh@49 -- # echo DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 00:24:29.440 15:34:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:24:29.440 15:34:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:29.440 15:34:46 -- host/auth.sh@68 -- # digest=sha384 00:24:29.440 15:34:46 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:29.440 15:34:46 -- host/auth.sh@68 -- # keyid=3 00:24:29.440 15:34:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:29.440 15:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.440 15:34:46 -- common/autotest_common.sh@10 -- # set +x 00:24:29.440 15:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.440 15:34:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:29.440 15:34:46 -- nvmf/common.sh@717 -- # local ip 00:24:29.440 15:34:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:29.440 15:34:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:29.440 15:34:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.440 15:34:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.440 15:34:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:29.440 15:34:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.440 15:34:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:29.440 15:34:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:29.440 15:34:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:29.440 15:34:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:29.440 15:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.440 15:34:46 -- common/autotest_common.sh@10 -- # set +x 00:24:29.699 nvme0n1 00:24:29.699 15:34:47 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.961 15:34:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.961 15:34:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.961 15:34:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:29.961 15:34:47 -- common/autotest_common.sh@10 -- # set +x 00:24:29.961 15:34:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.961 15:34:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.961 15:34:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.961 15:34:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.961 15:34:47 -- common/autotest_common.sh@10 -- # set +x 00:24:29.961 15:34:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.961 15:34:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:29.961 15:34:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:29.961 15:34:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:29.961 15:34:47 -- host/auth.sh@44 -- # digest=sha384 00:24:29.961 15:34:47 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:29.961 15:34:47 -- host/auth.sh@44 -- # keyid=4 00:24:29.961 15:34:47 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=: 00:24:29.961 15:34:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:29.961 15:34:47 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:29.961 15:34:47 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=: 00:24:29.961 15:34:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:24:29.961 15:34:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:29.961 15:34:47 -- host/auth.sh@68 -- # digest=sha384 00:24:29.961 15:34:47 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:29.961 15:34:47 -- host/auth.sh@68 -- # keyid=4 00:24:29.961 15:34:47 -- 
host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:29.961 15:34:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.961 15:34:47 -- common/autotest_common.sh@10 -- # set +x 00:24:29.961 15:34:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.961 15:34:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:29.961 15:34:47 -- nvmf/common.sh@717 -- # local ip 00:24:29.961 15:34:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:29.961 15:34:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:29.961 15:34:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.961 15:34:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.961 15:34:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:29.961 15:34:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.961 15:34:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:29.961 15:34:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:29.961 15:34:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:29.961 15:34:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:29.961 15:34:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.961 15:34:47 -- common/autotest_common.sh@10 -- # set +x 00:24:30.534 nvme0n1 00:24:30.534 15:34:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.535 15:34:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.535 15:34:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.535 15:34:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:30.535 15:34:47 -- common/autotest_common.sh@10 -- # set +x 00:24:30.535 15:34:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.535 15:34:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.535 15:34:47 -- host/auth.sh@74 
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.535 15:34:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.535 15:34:47 -- common/autotest_common.sh@10 -- # set +x 00:24:30.535 15:34:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.535 15:34:47 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:30.535 15:34:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:30.535 15:34:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:30.535 15:34:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:30.535 15:34:47 -- host/auth.sh@44 -- # digest=sha384 00:24:30.535 15:34:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:30.535 15:34:47 -- host/auth.sh@44 -- # keyid=0 00:24:30.535 15:34:47 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:30.535 15:34:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:30.535 15:34:47 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:30.535 15:34:47 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:30.535 15:34:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:24:30.535 15:34:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:30.535 15:34:47 -- host/auth.sh@68 -- # digest=sha384 00:24:30.535 15:34:47 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:30.535 15:34:47 -- host/auth.sh@68 -- # keyid=0 00:24:30.535 15:34:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:30.535 15:34:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.535 15:34:47 -- common/autotest_common.sh@10 -- # set +x 00:24:30.535 15:34:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.535 15:34:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:30.535 15:34:47 -- nvmf/common.sh@717 -- # local ip 00:24:30.535 15:34:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:30.535 15:34:47 -- 
nvmf/common.sh@718 -- # local -A ip_candidates 00:24:30.535 15:34:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.535 15:34:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.535 15:34:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:30.535 15:34:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.535 15:34:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:30.535 15:34:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:30.535 15:34:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:30.535 15:34:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:30.535 15:34:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.535 15:34:47 -- common/autotest_common.sh@10 -- # set +x 00:24:31.108 nvme0n1 00:24:31.108 15:34:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.108 15:34:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.108 15:34:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:31.108 15:34:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.108 15:34:48 -- common/autotest_common.sh@10 -- # set +x 00:24:31.108 15:34:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.370 15:34:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.370 15:34:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.370 15:34:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.370 15:34:48 -- common/autotest_common.sh@10 -- # set +x 00:24:31.370 15:34:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.370 15:34:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:31.370 15:34:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:31.370 15:34:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:31.370 15:34:48 -- 
host/auth.sh@44 -- # digest=sha384 00:24:31.370 15:34:48 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:31.370 15:34:48 -- host/auth.sh@44 -- # keyid=1 00:24:31.370 15:34:48 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:31.370 15:34:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:31.370 15:34:48 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:31.370 15:34:48 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:31.370 15:34:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:24:31.370 15:34:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:31.370 15:34:48 -- host/auth.sh@68 -- # digest=sha384 00:24:31.370 15:34:48 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:31.370 15:34:48 -- host/auth.sh@68 -- # keyid=1 00:24:31.370 15:34:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:31.370 15:34:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.370 15:34:48 -- common/autotest_common.sh@10 -- # set +x 00:24:31.370 15:34:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.370 15:34:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:31.370 15:34:48 -- nvmf/common.sh@717 -- # local ip 00:24:31.370 15:34:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:31.370 15:34:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:31.370 15:34:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.370 15:34:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.370 15:34:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:31.370 15:34:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.370 15:34:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:31.370 15:34:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:31.370 15:34:48 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:24:31.370 15:34:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:31.370 15:34:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.370 15:34:48 -- common/autotest_common.sh@10 -- # set +x 00:24:31.941 nvme0n1 00:24:31.941 15:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.941 15:34:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.941 15:34:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:31.941 15:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.941 15:34:49 -- common/autotest_common.sh@10 -- # set +x 00:24:31.941 15:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.941 15:34:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.941 15:34:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.941 15:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.941 15:34:49 -- common/autotest_common.sh@10 -- # set +x 00:24:32.201 15:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.201 15:34:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:32.201 15:34:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:32.201 15:34:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:32.201 15:34:49 -- host/auth.sh@44 -- # digest=sha384 00:24:32.201 15:34:49 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:32.201 15:34:49 -- host/auth.sh@44 -- # keyid=2 00:24:32.201 15:34:49 -- host/auth.sh@45 -- # key=DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:32.201 15:34:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:32.201 15:34:49 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:32.201 15:34:49 -- host/auth.sh@49 -- # echo DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:32.201 15:34:49 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe8192 2 00:24:32.201 15:34:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:32.201 15:34:49 -- host/auth.sh@68 -- # digest=sha384 00:24:32.201 15:34:49 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:32.201 15:34:49 -- host/auth.sh@68 -- # keyid=2 00:24:32.201 15:34:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:32.201 15:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.201 15:34:49 -- common/autotest_common.sh@10 -- # set +x 00:24:32.201 15:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.201 15:34:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:32.201 15:34:49 -- nvmf/common.sh@717 -- # local ip 00:24:32.201 15:34:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:32.201 15:34:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:32.201 15:34:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.201 15:34:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.201 15:34:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:32.201 15:34:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.201 15:34:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:32.201 15:34:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:32.201 15:34:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:32.201 15:34:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:32.201 15:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.201 15:34:49 -- common/autotest_common.sh@10 -- # set +x 00:24:32.771 nvme0n1 00:24:32.771 15:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.771 15:34:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.771 15:34:50 -- host/auth.sh@73 -- # jq -r '.[].name' 
00:24:32.771 15:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.771 15:34:50 -- common/autotest_common.sh@10 -- # set +x 00:24:32.771 15:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.771 15:34:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.771 15:34:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.771 15:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.771 15:34:50 -- common/autotest_common.sh@10 -- # set +x 00:24:33.031 15:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.031 15:34:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:33.031 15:34:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:33.031 15:34:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:33.031 15:34:50 -- host/auth.sh@44 -- # digest=sha384 00:24:33.031 15:34:50 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:33.031 15:34:50 -- host/auth.sh@44 -- # keyid=3 00:24:33.031 15:34:50 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 00:24:33.031 15:34:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:33.031 15:34:50 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:33.031 15:34:50 -- host/auth.sh@49 -- # echo DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 00:24:33.031 15:34:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:24:33.031 15:34:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:33.031 15:34:50 -- host/auth.sh@68 -- # digest=sha384 00:24:33.031 15:34:50 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:33.031 15:34:50 -- host/auth.sh@68 -- # keyid=3 00:24:33.031 15:34:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:33.031 15:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.031 15:34:50 -- common/autotest_common.sh@10 -- # 
set +x 00:24:33.031 15:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.031 15:34:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:33.031 15:34:50 -- nvmf/common.sh@717 -- # local ip 00:24:33.031 15:34:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:33.031 15:34:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:33.031 15:34:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.031 15:34:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.031 15:34:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:33.031 15:34:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.031 15:34:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:33.031 15:34:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:33.031 15:34:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:33.031 15:34:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:33.031 15:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.031 15:34:50 -- common/autotest_common.sh@10 -- # set +x 00:24:33.601 nvme0n1 00:24:33.601 15:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.601 15:34:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.601 15:34:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:33.601 15:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.601 15:34:50 -- common/autotest_common.sh@10 -- # set +x 00:24:33.601 15:34:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.601 15:34:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.601 15:34:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.601 15:34:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.601 15:34:51 -- common/autotest_common.sh@10 -- # set +x 00:24:33.862 15:34:51 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.862 15:34:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:33.862 15:34:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:33.862 15:34:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:33.862 15:34:51 -- host/auth.sh@44 -- # digest=sha384 00:24:33.862 15:34:51 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:33.862 15:34:51 -- host/auth.sh@44 -- # keyid=4 00:24:33.862 15:34:51 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=: 00:24:33.862 15:34:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:33.862 15:34:51 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:33.862 15:34:51 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=: 00:24:33.862 15:34:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:24:33.862 15:34:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:33.862 15:34:51 -- host/auth.sh@68 -- # digest=sha384 00:24:33.862 15:34:51 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:33.862 15:34:51 -- host/auth.sh@68 -- # keyid=4 00:24:33.862 15:34:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:33.862 15:34:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.862 15:34:51 -- common/autotest_common.sh@10 -- # set +x 00:24:33.862 15:34:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.862 15:34:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:33.862 15:34:51 -- nvmf/common.sh@717 -- # local ip 00:24:33.862 15:34:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:33.862 15:34:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:33.862 15:34:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.862 15:34:51 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.862 15:34:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:33.862 15:34:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.862 15:34:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:33.862 15:34:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:33.862 15:34:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:33.862 15:34:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:33.862 15:34:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.862 15:34:51 -- common/autotest_common.sh@10 -- # set +x 00:24:34.435 nvme0n1 00:24:34.435 15:34:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.435 15:34:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.435 15:34:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:34.435 15:34:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.435 15:34:51 -- common/autotest_common.sh@10 -- # set +x 00:24:34.435 15:34:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.435 15:34:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.435 15:34:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.435 15:34:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.435 15:34:51 -- common/autotest_common.sh@10 -- # set +x 00:24:34.696 15:34:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.696 15:34:51 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:24:34.696 15:34:51 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:34.696 15:34:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:34.696 15:34:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:34.696 15:34:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:34.696 15:34:51 -- host/auth.sh@44 -- # digest=sha512 
00:24:34.696 15:34:51 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:34.696 15:34:51 -- host/auth.sh@44 -- # keyid=0 00:24:34.696 15:34:51 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:34.696 15:34:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:34.696 15:34:51 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:34.696 15:34:51 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:34.696 15:34:51 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:24:34.696 15:34:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:34.696 15:34:51 -- host/auth.sh@68 -- # digest=sha512 00:24:34.696 15:34:51 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:34.696 15:34:51 -- host/auth.sh@68 -- # keyid=0 00:24:34.696 15:34:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:34.696 15:34:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.696 15:34:51 -- common/autotest_common.sh@10 -- # set +x 00:24:34.696 15:34:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.696 15:34:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:34.696 15:34:51 -- nvmf/common.sh@717 -- # local ip 00:24:34.696 15:34:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:34.696 15:34:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:34.696 15:34:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.696 15:34:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.696 15:34:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:34.696 15:34:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.696 15:34:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:34.696 15:34:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:34.696 15:34:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:34.696 15:34:51 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:34.696 15:34:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.696 15:34:51 -- common/autotest_common.sh@10 -- # set +x 00:24:34.696 nvme0n1 00:24:34.696 15:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.696 15:34:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.696 15:34:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:34.696 15:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.696 15:34:52 -- common/autotest_common.sh@10 -- # set +x 00:24:34.696 15:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.696 15:34:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.696 15:34:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.696 15:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.696 15:34:52 -- common/autotest_common.sh@10 -- # set +x 00:24:34.696 15:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.696 15:34:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:34.696 15:34:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:34.696 15:34:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:34.696 15:34:52 -- host/auth.sh@44 -- # digest=sha512 00:24:34.696 15:34:52 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:34.696 15:34:52 -- host/auth.sh@44 -- # keyid=1 00:24:34.696 15:34:52 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:34.696 15:34:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:34.696 15:34:52 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:34.696 15:34:52 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:34.696 15:34:52 -- host/auth.sh@111 -- # connect_authenticate sha512 
ffdhe2048 1 00:24:34.696 15:34:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:34.696 15:34:52 -- host/auth.sh@68 -- # digest=sha512 00:24:34.696 15:34:52 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:34.696 15:34:52 -- host/auth.sh@68 -- # keyid=1 00:24:34.696 15:34:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:34.696 15:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.696 15:34:52 -- common/autotest_common.sh@10 -- # set +x 00:24:34.696 15:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.696 15:34:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:34.696 15:34:52 -- nvmf/common.sh@717 -- # local ip 00:24:34.696 15:34:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:34.696 15:34:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:34.696 15:34:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.696 15:34:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.696 15:34:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:34.696 15:34:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.696 15:34:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:34.696 15:34:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:34.696 15:34:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:34.696 15:34:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:34.696 15:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.696 15:34:52 -- common/autotest_common.sh@10 -- # set +x 00:24:34.958 nvme0n1 00:24:34.958 15:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.958 15:34:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.958 15:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.958 15:34:52 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:24:34.958 15:34:52 -- common/autotest_common.sh@10 -- # set +x 00:24:34.958 15:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.958 15:34:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.958 15:34:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.958 15:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.958 15:34:52 -- common/autotest_common.sh@10 -- # set +x 00:24:34.958 15:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.958 15:34:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:34.958 15:34:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:34.958 15:34:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:34.958 15:34:52 -- host/auth.sh@44 -- # digest=sha512 00:24:34.958 15:34:52 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:34.958 15:34:52 -- host/auth.sh@44 -- # keyid=2 00:24:34.958 15:34:52 -- host/auth.sh@45 -- # key=DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:34.958 15:34:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:34.958 15:34:52 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:34.958 15:34:52 -- host/auth.sh@49 -- # echo DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:34.958 15:34:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:24:34.958 15:34:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:34.958 15:34:52 -- host/auth.sh@68 -- # digest=sha512 00:24:34.958 15:34:52 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:34.958 15:34:52 -- host/auth.sh@68 -- # keyid=2 00:24:34.958 15:34:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:34.958 15:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.958 15:34:52 -- common/autotest_common.sh@10 -- # set +x 00:24:34.958 15:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:24:34.958 15:34:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:34.958 15:34:52 -- nvmf/common.sh@717 -- # local ip 00:24:34.958 15:34:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:34.958 15:34:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:34.958 15:34:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.958 15:34:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.958 15:34:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:34.958 15:34:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.958 15:34:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:34.958 15:34:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:34.958 15:34:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:34.958 15:34:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:34.958 15:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.958 15:34:52 -- common/autotest_common.sh@10 -- # set +x 00:24:35.220 nvme0n1 00:24:35.220 15:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.220 15:34:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.220 15:34:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:35.220 15:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.220 15:34:52 -- common/autotest_common.sh@10 -- # set +x 00:24:35.220 15:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.220 15:34:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.220 15:34:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.220 15:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.220 15:34:52 -- common/autotest_common.sh@10 -- # set +x 00:24:35.220 15:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.220 15:34:52 -- host/auth.sh@109 -- # for 
keyid in "${!keys[@]}" 00:24:35.220 15:34:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:35.220 15:34:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:35.220 15:34:52 -- host/auth.sh@44 -- # digest=sha512 00:24:35.220 15:34:52 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:35.220 15:34:52 -- host/auth.sh@44 -- # keyid=3 00:24:35.220 15:34:52 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 00:24:35.220 15:34:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:35.220 15:34:52 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:35.220 15:34:52 -- host/auth.sh@49 -- # echo DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 00:24:35.220 15:34:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:24:35.220 15:34:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:35.220 15:34:52 -- host/auth.sh@68 -- # digest=sha512 00:24:35.220 15:34:52 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:35.220 15:34:52 -- host/auth.sh@68 -- # keyid=3 00:24:35.220 15:34:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:35.220 15:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.220 15:34:52 -- common/autotest_common.sh@10 -- # set +x 00:24:35.220 15:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.220 15:34:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:35.220 15:34:52 -- nvmf/common.sh@717 -- # local ip 00:24:35.220 15:34:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:35.220 15:34:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:35.220 15:34:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.220 15:34:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.220 15:34:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:35.220 15:34:52 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:24:35.220 15:34:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:35.220 15:34:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:35.220 15:34:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:35.220 15:34:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:35.220 15:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.220 15:34:52 -- common/autotest_common.sh@10 -- # set +x 00:24:35.481 nvme0n1 00:24:35.481 15:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.481 15:34:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.481 15:34:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:35.481 15:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.481 15:34:52 -- common/autotest_common.sh@10 -- # set +x 00:24:35.481 15:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.481 15:34:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.481 15:34:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.481 15:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.481 15:34:52 -- common/autotest_common.sh@10 -- # set +x 00:24:35.481 15:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.481 15:34:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:35.481 15:34:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:35.481 15:34:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:35.481 15:34:52 -- host/auth.sh@44 -- # digest=sha512 00:24:35.481 15:34:52 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:35.481 15:34:52 -- host/auth.sh@44 -- # keyid=4 00:24:35.481 15:34:52 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=: 00:24:35.481 15:34:52 -- host/auth.sh@47 
-- # echo 'hmac(sha512)' 00:24:35.481 15:34:52 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:35.481 15:34:52 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=: 00:24:35.481 15:34:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:24:35.481 15:34:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:35.481 15:34:52 -- host/auth.sh@68 -- # digest=sha512 00:24:35.481 15:34:52 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:35.481 15:34:52 -- host/auth.sh@68 -- # keyid=4 00:24:35.481 15:34:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:35.481 15:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.481 15:34:52 -- common/autotest_common.sh@10 -- # set +x 00:24:35.481 15:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.481 15:34:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:35.481 15:34:52 -- nvmf/common.sh@717 -- # local ip 00:24:35.481 15:34:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:35.481 15:34:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:35.481 15:34:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.481 15:34:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.481 15:34:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:35.481 15:34:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.481 15:34:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:35.481 15:34:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:35.482 15:34:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:35.482 15:34:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:35.482 15:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.482 15:34:52 -- 
common/autotest_common.sh@10 -- # set +x 00:24:35.742 nvme0n1 00:24:35.742 15:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.742 15:34:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.742 15:34:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:35.742 15:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.742 15:34:52 -- common/autotest_common.sh@10 -- # set +x 00:24:35.742 15:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.743 15:34:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.743 15:34:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.743 15:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.743 15:34:53 -- common/autotest_common.sh@10 -- # set +x 00:24:35.743 15:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.743 15:34:53 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:35.743 15:34:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:35.743 15:34:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:35.743 15:34:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:35.743 15:34:53 -- host/auth.sh@44 -- # digest=sha512 00:24:35.743 15:34:53 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:35.743 15:34:53 -- host/auth.sh@44 -- # keyid=0 00:24:35.743 15:34:53 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:35.743 15:34:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:35.743 15:34:53 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:35.743 15:34:53 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:35.743 15:34:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:24:35.743 15:34:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:35.743 15:34:53 -- host/auth.sh@68 -- # digest=sha512 00:24:35.743 15:34:53 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 
00:24:35.743 15:34:53 -- host/auth.sh@68 -- # keyid=0 00:24:35.743 15:34:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:35.743 15:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.743 15:34:53 -- common/autotest_common.sh@10 -- # set +x 00:24:35.743 15:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.743 15:34:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:35.743 15:34:53 -- nvmf/common.sh@717 -- # local ip 00:24:35.743 15:34:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:35.743 15:34:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:35.743 15:34:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.743 15:34:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.743 15:34:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:35.743 15:34:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.743 15:34:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:35.743 15:34:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:35.743 15:34:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:35.743 15:34:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:35.743 15:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.743 15:34:53 -- common/autotest_common.sh@10 -- # set +x 00:24:36.012 nvme0n1 00:24:36.012 15:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.012 15:34:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.012 15:34:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:36.012 15:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.012 15:34:53 -- common/autotest_common.sh@10 -- # set +x 00:24:36.012 15:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.012 15:34:53 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.012 15:34:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.012 15:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.012 15:34:53 -- common/autotest_common.sh@10 -- # set +x 00:24:36.012 15:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.012 15:34:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:36.012 15:34:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:36.012 15:34:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:36.012 15:34:53 -- host/auth.sh@44 -- # digest=sha512 00:24:36.012 15:34:53 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:36.012 15:34:53 -- host/auth.sh@44 -- # keyid=1 00:24:36.012 15:34:53 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:36.012 15:34:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:36.012 15:34:53 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:36.012 15:34:53 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:36.012 15:34:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:24:36.012 15:34:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:36.012 15:34:53 -- host/auth.sh@68 -- # digest=sha512 00:24:36.012 15:34:53 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:36.012 15:34:53 -- host/auth.sh@68 -- # keyid=1 00:24:36.012 15:34:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:36.012 15:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.012 15:34:53 -- common/autotest_common.sh@10 -- # set +x 00:24:36.012 15:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.012 15:34:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:36.012 15:34:53 -- nvmf/common.sh@717 -- # local ip 00:24:36.012 15:34:53 -- 
nvmf/common.sh@718 -- # ip_candidates=() 00:24:36.012 15:34:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:36.012 15:34:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.012 15:34:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.012 15:34:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:36.012 15:34:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.012 15:34:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:36.012 15:34:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:36.012 15:34:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:36.012 15:34:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:36.012 15:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.012 15:34:53 -- common/autotest_common.sh@10 -- # set +x 00:24:36.278 nvme0n1 00:24:36.278 15:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.278 15:34:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.278 15:34:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:36.278 15:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.278 15:34:53 -- common/autotest_common.sh@10 -- # set +x 00:24:36.278 15:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.278 15:34:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.278 15:34:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.278 15:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.278 15:34:53 -- common/autotest_common.sh@10 -- # set +x 00:24:36.278 15:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.278 15:34:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:36.278 15:34:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:36.278 15:34:53 -- host/auth.sh@42 
-- # local digest dhgroup keyid key 00:24:36.278 15:34:53 -- host/auth.sh@44 -- # digest=sha512 00:24:36.278 15:34:53 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:36.278 15:34:53 -- host/auth.sh@44 -- # keyid=2 00:24:36.278 15:34:53 -- host/auth.sh@45 -- # key=DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:36.278 15:34:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:36.278 15:34:53 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:36.278 15:34:53 -- host/auth.sh@49 -- # echo DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:36.278 15:34:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:24:36.278 15:34:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:36.278 15:34:53 -- host/auth.sh@68 -- # digest=sha512 00:24:36.278 15:34:53 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:36.278 15:34:53 -- host/auth.sh@68 -- # keyid=2 00:24:36.278 15:34:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:36.278 15:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.278 15:34:53 -- common/autotest_common.sh@10 -- # set +x 00:24:36.278 15:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.278 15:34:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:36.278 15:34:53 -- nvmf/common.sh@717 -- # local ip 00:24:36.278 15:34:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:36.278 15:34:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:36.278 15:34:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.278 15:34:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.278 15:34:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:36.278 15:34:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.278 15:34:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:36.278 15:34:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:36.278 15:34:53 -- nvmf/common.sh@731 
-- # echo 10.0.0.1 00:24:36.278 15:34:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:36.278 15:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.278 15:34:53 -- common/autotest_common.sh@10 -- # set +x 00:24:36.539 nvme0n1 00:24:36.540 15:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.540 15:34:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.540 15:34:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:36.540 15:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.540 15:34:53 -- common/autotest_common.sh@10 -- # set +x 00:24:36.540 15:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.540 15:34:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.540 15:34:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.540 15:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.540 15:34:53 -- common/autotest_common.sh@10 -- # set +x 00:24:36.540 15:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.540 15:34:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:36.540 15:34:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:36.540 15:34:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:36.540 15:34:53 -- host/auth.sh@44 -- # digest=sha512 00:24:36.540 15:34:53 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:36.540 15:34:53 -- host/auth.sh@44 -- # keyid=3 00:24:36.540 15:34:53 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 00:24:36.540 15:34:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:36.540 15:34:53 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:36.540 15:34:53 -- host/auth.sh@49 -- # echo DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 
00:24:36.540 15:34:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:24:36.540 15:34:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:36.540 15:34:53 -- host/auth.sh@68 -- # digest=sha512 00:24:36.540 15:34:53 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:36.540 15:34:53 -- host/auth.sh@68 -- # keyid=3 00:24:36.540 15:34:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:36.540 15:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.540 15:34:53 -- common/autotest_common.sh@10 -- # set +x 00:24:36.540 15:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.540 15:34:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:36.540 15:34:53 -- nvmf/common.sh@717 -- # local ip 00:24:36.540 15:34:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:36.540 15:34:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:36.540 15:34:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.540 15:34:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.540 15:34:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:36.540 15:34:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.540 15:34:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:36.540 15:34:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:36.540 15:34:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:36.540 15:34:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:36.540 15:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.540 15:34:53 -- common/autotest_common.sh@10 -- # set +x 00:24:36.801 nvme0n1 00:24:36.801 15:34:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.801 15:34:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.801 15:34:54 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:24:36.801 15:34:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.801 15:34:54 -- common/autotest_common.sh@10 -- # set +x 00:24:36.801 15:34:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.801 15:34:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.801 15:34:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.801 15:34:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.801 15:34:54 -- common/autotest_common.sh@10 -- # set +x 00:24:36.801 15:34:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.801 15:34:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:36.801 15:34:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:36.801 15:34:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:36.801 15:34:54 -- host/auth.sh@44 -- # digest=sha512 00:24:36.801 15:34:54 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:36.801 15:34:54 -- host/auth.sh@44 -- # keyid=4 00:24:36.801 15:34:54 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=: 00:24:36.801 15:34:54 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:36.801 15:34:54 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:36.801 15:34:54 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=: 00:24:36.801 15:34:54 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:24:36.801 15:34:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:36.801 15:34:54 -- host/auth.sh@68 -- # digest=sha512 00:24:36.801 15:34:54 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:36.801 15:34:54 -- host/auth.sh@68 -- # keyid=4 00:24:36.801 15:34:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:36.801 15:34:54 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:24:36.801 15:34:54 -- common/autotest_common.sh@10 -- # set +x 00:24:36.801 15:34:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.801 15:34:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:36.801 15:34:54 -- nvmf/common.sh@717 -- # local ip 00:24:36.801 15:34:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:36.801 15:34:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:36.801 15:34:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.801 15:34:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.801 15:34:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:36.801 15:34:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.801 15:34:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:36.801 15:34:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:36.801 15:34:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:36.801 15:34:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:36.801 15:34:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.801 15:34:54 -- common/autotest_common.sh@10 -- # set +x 00:24:37.062 nvme0n1 00:24:37.062 15:34:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.062 15:34:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.063 15:34:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:37.063 15:34:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.063 15:34:54 -- common/autotest_common.sh@10 -- # set +x 00:24:37.063 15:34:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.063 15:34:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.063 15:34:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.063 15:34:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.063 15:34:54 -- 
common/autotest_common.sh@10 -- # set +x 00:24:37.063 15:34:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.063 15:34:54 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:37.063 15:34:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:37.063 15:34:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:37.063 15:34:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:37.063 15:34:54 -- host/auth.sh@44 -- # digest=sha512 00:24:37.063 15:34:54 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:37.063 15:34:54 -- host/auth.sh@44 -- # keyid=0 00:24:37.063 15:34:54 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:37.063 15:34:54 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:37.063 15:34:54 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:37.063 15:34:54 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:37.063 15:34:54 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:24:37.063 15:34:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:37.063 15:34:54 -- host/auth.sh@68 -- # digest=sha512 00:24:37.063 15:34:54 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:37.063 15:34:54 -- host/auth.sh@68 -- # keyid=0 00:24:37.063 15:34:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:37.063 15:34:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.063 15:34:54 -- common/autotest_common.sh@10 -- # set +x 00:24:37.063 15:34:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.063 15:34:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:37.063 15:34:54 -- nvmf/common.sh@717 -- # local ip 00:24:37.063 15:34:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:37.063 15:34:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:37.063 15:34:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.063 
15:34:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.063 15:34:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:37.063 15:34:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.063 15:34:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:37.063 15:34:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:37.063 15:34:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:37.063 15:34:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:37.063 15:34:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.063 15:34:54 -- common/autotest_common.sh@10 -- # set +x 00:24:37.324 nvme0n1 00:24:37.324 15:34:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.324 15:34:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.324 15:34:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.324 15:34:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:37.324 15:34:54 -- common/autotest_common.sh@10 -- # set +x 00:24:37.324 15:34:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.324 15:34:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.324 15:34:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.324 15:34:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.324 15:34:54 -- common/autotest_common.sh@10 -- # set +x 00:24:37.324 15:34:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.324 15:34:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:37.324 15:34:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:37.324 15:34:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:37.324 15:34:54 -- host/auth.sh@44 -- # digest=sha512 00:24:37.324 15:34:54 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:37.324 15:34:54 -- host/auth.sh@44 -- # keyid=1 
00:24:37.324 15:34:54 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:37.324 15:34:54 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:37.324 15:34:54 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:37.324 15:34:54 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:37.324 15:34:54 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:24:37.324 15:34:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:37.324 15:34:54 -- host/auth.sh@68 -- # digest=sha512 00:24:37.324 15:34:54 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:37.324 15:34:54 -- host/auth.sh@68 -- # keyid=1 00:24:37.324 15:34:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:37.324 15:34:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.324 15:34:54 -- common/autotest_common.sh@10 -- # set +x 00:24:37.324 15:34:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.324 15:34:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:37.324 15:34:54 -- nvmf/common.sh@717 -- # local ip 00:24:37.324 15:34:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:37.324 15:34:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:37.324 15:34:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.324 15:34:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.324 15:34:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:37.324 15:34:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.324 15:34:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:37.324 15:34:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:37.324 15:34:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:37.324 15:34:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:37.324 15:34:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.324 15:34:54 -- common/autotest_common.sh@10 -- # set +x 00:24:37.585 nvme0n1 00:24:37.585 15:34:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.585 15:34:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.585 15:34:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:37.585 15:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.585 15:34:55 -- common/autotest_common.sh@10 -- # set +x 00:24:37.585 15:34:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.845 15:34:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.845 15:34:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.845 15:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.845 15:34:55 -- common/autotest_common.sh@10 -- # set +x 00:24:37.845 15:34:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.845 15:34:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:37.845 15:34:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:37.845 15:34:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:37.845 15:34:55 -- host/auth.sh@44 -- # digest=sha512 00:24:37.845 15:34:55 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:37.846 15:34:55 -- host/auth.sh@44 -- # keyid=2 00:24:37.846 15:34:55 -- host/auth.sh@45 -- # key=DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:37.846 15:34:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:37.846 15:34:55 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:37.846 15:34:55 -- host/auth.sh@49 -- # echo DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:37.846 15:34:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:24:37.846 15:34:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:37.846 15:34:55 -- 
host/auth.sh@68 -- # digest=sha512 00:24:37.846 15:34:55 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:37.846 15:34:55 -- host/auth.sh@68 -- # keyid=2 00:24:37.846 15:34:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:37.846 15:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.846 15:34:55 -- common/autotest_common.sh@10 -- # set +x 00:24:37.846 15:34:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.846 15:34:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:37.846 15:34:55 -- nvmf/common.sh@717 -- # local ip 00:24:37.846 15:34:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:37.846 15:34:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:37.846 15:34:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.846 15:34:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.846 15:34:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:37.846 15:34:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.846 15:34:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:37.846 15:34:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:37.846 15:34:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:37.846 15:34:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:37.846 15:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.846 15:34:55 -- common/autotest_common.sh@10 -- # set +x 00:24:38.106 nvme0n1 00:24:38.106 15:34:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.106 15:34:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.106 15:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.106 15:34:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:38.106 15:34:55 -- common/autotest_common.sh@10 -- # set +x 
00:24:38.106 15:34:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.106 15:34:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.106 15:34:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.106 15:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.106 15:34:55 -- common/autotest_common.sh@10 -- # set +x 00:24:38.106 15:34:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.106 15:34:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:38.106 15:34:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:38.106 15:34:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:38.106 15:34:55 -- host/auth.sh@44 -- # digest=sha512 00:24:38.106 15:34:55 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:38.106 15:34:55 -- host/auth.sh@44 -- # keyid=3 00:24:38.106 15:34:55 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 00:24:38.106 15:34:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:38.106 15:34:55 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:38.106 15:34:55 -- host/auth.sh@49 -- # echo DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 00:24:38.106 15:34:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:24:38.106 15:34:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:38.106 15:34:55 -- host/auth.sh@68 -- # digest=sha512 00:24:38.106 15:34:55 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:38.106 15:34:55 -- host/auth.sh@68 -- # keyid=3 00:24:38.106 15:34:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:38.106 15:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.106 15:34:55 -- common/autotest_common.sh@10 -- # set +x 00:24:38.106 15:34:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.106 15:34:55 -- host/auth.sh@70 -- # get_main_ns_ip 
00:24:38.106 15:34:55 -- nvmf/common.sh@717 -- # local ip 00:24:38.106 15:34:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:38.106 15:34:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:38.106 15:34:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.106 15:34:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.106 15:34:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:38.106 15:34:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.106 15:34:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:38.106 15:34:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:38.106 15:34:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:38.106 15:34:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:38.106 15:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.106 15:34:55 -- common/autotest_common.sh@10 -- # set +x 00:24:38.367 nvme0n1 00:24:38.367 15:34:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.367 15:34:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.367 15:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.367 15:34:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:38.367 15:34:55 -- common/autotest_common.sh@10 -- # set +x 00:24:38.367 15:34:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.367 15:34:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.367 15:34:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.367 15:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.367 15:34:55 -- common/autotest_common.sh@10 -- # set +x 00:24:38.367 15:34:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.367 15:34:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:38.367 15:34:55 -- host/auth.sh@110 
-- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:38.367 15:34:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:38.367 15:34:55 -- host/auth.sh@44 -- # digest=sha512 00:24:38.367 15:34:55 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:38.367 15:34:55 -- host/auth.sh@44 -- # keyid=4 00:24:38.367 15:34:55 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=: 00:24:38.367 15:34:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:38.367 15:34:55 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:38.367 15:34:55 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=: 00:24:38.367 15:34:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:24:38.367 15:34:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:38.367 15:34:55 -- host/auth.sh@68 -- # digest=sha512 00:24:38.367 15:34:55 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:38.367 15:34:55 -- host/auth.sh@68 -- # keyid=4 00:24:38.367 15:34:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:38.367 15:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.367 15:34:55 -- common/autotest_common.sh@10 -- # set +x 00:24:38.367 15:34:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.367 15:34:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:38.367 15:34:55 -- nvmf/common.sh@717 -- # local ip 00:24:38.367 15:34:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:38.367 15:34:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:38.367 15:34:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.367 15:34:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.367 15:34:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:38.367 15:34:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 
00:24:38.367 15:34:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:38.367 15:34:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:38.367 15:34:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:38.367 15:34:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:38.367 15:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.367 15:34:55 -- common/autotest_common.sh@10 -- # set +x 00:24:38.629 nvme0n1 00:24:38.629 15:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.889 15:34:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.889 15:34:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:38.889 15:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.889 15:34:56 -- common/autotest_common.sh@10 -- # set +x 00:24:38.889 15:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.889 15:34:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.889 15:34:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.889 15:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.889 15:34:56 -- common/autotest_common.sh@10 -- # set +x 00:24:38.889 15:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.889 15:34:56 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:38.889 15:34:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:38.889 15:34:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:38.889 15:34:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:38.889 15:34:56 -- host/auth.sh@44 -- # digest=sha512 00:24:38.889 15:34:56 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:38.889 15:34:56 -- host/auth.sh@44 -- # keyid=0 00:24:38.889 15:34:56 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:38.889 15:34:56 -- 
host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:38.889 15:34:56 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:38.889 15:34:56 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:38.889 15:34:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:24:38.889 15:34:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:38.889 15:34:56 -- host/auth.sh@68 -- # digest=sha512 00:24:38.889 15:34:56 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:38.889 15:34:56 -- host/auth.sh@68 -- # keyid=0 00:24:38.889 15:34:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:38.889 15:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.889 15:34:56 -- common/autotest_common.sh@10 -- # set +x 00:24:38.889 15:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.889 15:34:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:38.889 15:34:56 -- nvmf/common.sh@717 -- # local ip 00:24:38.889 15:34:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:38.889 15:34:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:38.889 15:34:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.889 15:34:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.889 15:34:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:38.889 15:34:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.889 15:34:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:38.889 15:34:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:38.889 15:34:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:38.889 15:34:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:38.889 15:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.889 15:34:56 -- common/autotest_common.sh@10 
-- # set +x 00:24:39.183 nvme0n1 00:24:39.183 15:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.183 15:34:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.183 15:34:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:39.183 15:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.183 15:34:56 -- common/autotest_common.sh@10 -- # set +x 00:24:39.183 15:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.444 15:34:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.444 15:34:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.444 15:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.444 15:34:56 -- common/autotest_common.sh@10 -- # set +x 00:24:39.444 15:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.444 15:34:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:39.444 15:34:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:39.444 15:34:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:39.444 15:34:56 -- host/auth.sh@44 -- # digest=sha512 00:24:39.444 15:34:56 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:39.444 15:34:56 -- host/auth.sh@44 -- # keyid=1 00:24:39.444 15:34:56 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:39.444 15:34:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:39.444 15:34:56 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:39.444 15:34:56 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:39.444 15:34:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:24:39.444 15:34:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:39.444 15:34:56 -- host/auth.sh@68 -- # digest=sha512 00:24:39.444 15:34:56 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:39.444 15:34:56 -- host/auth.sh@68 -- # keyid=1 00:24:39.444 
15:34:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:39.444 15:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.444 15:34:56 -- common/autotest_common.sh@10 -- # set +x 00:24:39.444 15:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.444 15:34:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:39.444 15:34:56 -- nvmf/common.sh@717 -- # local ip 00:24:39.444 15:34:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:39.444 15:34:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:39.444 15:34:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.444 15:34:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.444 15:34:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:39.444 15:34:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.444 15:34:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:39.444 15:34:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:39.444 15:34:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:39.444 15:34:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:39.444 15:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.444 15:34:56 -- common/autotest_common.sh@10 -- # set +x 00:24:40.016 nvme0n1 00:24:40.016 15:34:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.016 15:34:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.016 15:34:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:40.016 15:34:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.016 15:34:57 -- common/autotest_common.sh@10 -- # set +x 00:24:40.016 15:34:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.016 15:34:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.016 15:34:57 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.016 15:34:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.016 15:34:57 -- common/autotest_common.sh@10 -- # set +x 00:24:40.016 15:34:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.016 15:34:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:40.016 15:34:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:40.016 15:34:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:40.016 15:34:57 -- host/auth.sh@44 -- # digest=sha512 00:24:40.016 15:34:57 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:40.016 15:34:57 -- host/auth.sh@44 -- # keyid=2 00:24:40.016 15:34:57 -- host/auth.sh@45 -- # key=DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:40.016 15:34:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:40.016 15:34:57 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:40.016 15:34:57 -- host/auth.sh@49 -- # echo DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:40.016 15:34:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:24:40.016 15:34:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:40.016 15:34:57 -- host/auth.sh@68 -- # digest=sha512 00:24:40.016 15:34:57 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:40.016 15:34:57 -- host/auth.sh@68 -- # keyid=2 00:24:40.016 15:34:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:40.016 15:34:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.016 15:34:57 -- common/autotest_common.sh@10 -- # set +x 00:24:40.016 15:34:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.016 15:34:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:40.016 15:34:57 -- nvmf/common.sh@717 -- # local ip 00:24:40.016 15:34:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:40.016 15:34:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:40.016 15:34:57 
-- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.016 15:34:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.016 15:34:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:40.016 15:34:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.016 15:34:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:40.016 15:34:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:40.016 15:34:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:40.016 15:34:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:40.016 15:34:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.016 15:34:57 -- common/autotest_common.sh@10 -- # set +x 00:24:40.276 nvme0n1 00:24:40.276 15:34:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.276 15:34:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.276 15:34:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.276 15:34:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:40.276 15:34:57 -- common/autotest_common.sh@10 -- # set +x 00:24:40.276 15:34:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.559 15:34:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.559 15:34:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.559 15:34:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.559 15:34:57 -- common/autotest_common.sh@10 -- # set +x 00:24:40.559 15:34:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.559 15:34:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:40.559 15:34:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:40.559 15:34:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:40.559 15:34:57 -- host/auth.sh@44 -- # digest=sha512 00:24:40.559 15:34:57 -- 
host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:40.559 15:34:57 -- host/auth.sh@44 -- # keyid=3 00:24:40.559 15:34:57 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 00:24:40.559 15:34:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:40.559 15:34:57 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:40.559 15:34:57 -- host/auth.sh@49 -- # echo DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 00:24:40.559 15:34:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:24:40.559 15:34:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:40.559 15:34:57 -- host/auth.sh@68 -- # digest=sha512 00:24:40.559 15:34:57 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:40.559 15:34:57 -- host/auth.sh@68 -- # keyid=3 00:24:40.559 15:34:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:40.559 15:34:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.559 15:34:57 -- common/autotest_common.sh@10 -- # set +x 00:24:40.559 15:34:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.559 15:34:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:40.559 15:34:57 -- nvmf/common.sh@717 -- # local ip 00:24:40.559 15:34:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:40.559 15:34:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:40.559 15:34:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.559 15:34:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.559 15:34:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:40.559 15:34:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.559 15:34:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:40.559 15:34:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:40.559 15:34:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:40.559 15:34:57 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:40.559 15:34:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.559 15:34:57 -- common/autotest_common.sh@10 -- # set +x 00:24:40.892 nvme0n1 00:24:40.892 15:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.892 15:34:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.892 15:34:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:40.892 15:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.892 15:34:58 -- common/autotest_common.sh@10 -- # set +x 00:24:40.892 15:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.892 15:34:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.892 15:34:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.892 15:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.892 15:34:58 -- common/autotest_common.sh@10 -- # set +x 00:24:40.892 15:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.892 15:34:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:40.892 15:34:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:40.892 15:34:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:40.892 15:34:58 -- host/auth.sh@44 -- # digest=sha512 00:24:40.892 15:34:58 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:40.892 15:34:58 -- host/auth.sh@44 -- # keyid=4 00:24:40.892 15:34:58 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=: 00:24:40.892 15:34:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:40.892 15:34:58 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:40.892 15:34:58 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=: 00:24:40.892 15:34:58 -- 
host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:24:40.892 15:34:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:40.892 15:34:58 -- host/auth.sh@68 -- # digest=sha512 00:24:40.892 15:34:58 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:40.892 15:34:58 -- host/auth.sh@68 -- # keyid=4 00:24:40.892 15:34:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:40.892 15:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.892 15:34:58 -- common/autotest_common.sh@10 -- # set +x 00:24:40.892 15:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.892 15:34:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:40.892 15:34:58 -- nvmf/common.sh@717 -- # local ip 00:24:41.158 15:34:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:41.158 15:34:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:41.158 15:34:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.158 15:34:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.158 15:34:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:41.158 15:34:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.158 15:34:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:41.158 15:34:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:41.158 15:34:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:41.158 15:34:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:41.158 15:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.158 15:34:58 -- common/autotest_common.sh@10 -- # set +x 00:24:41.418 nvme0n1 00:24:41.418 15:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.418 15:34:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.418 15:34:58 -- host/auth.sh@73 -- # jq -r 
'.[].name' 00:24:41.418 15:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.418 15:34:58 -- common/autotest_common.sh@10 -- # set +x 00:24:41.418 15:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.418 15:34:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.418 15:34:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.418 15:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.418 15:34:58 -- common/autotest_common.sh@10 -- # set +x 00:24:41.418 15:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.418 15:34:58 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:41.418 15:34:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:41.418 15:34:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:41.418 15:34:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:41.418 15:34:58 -- host/auth.sh@44 -- # digest=sha512 00:24:41.418 15:34:58 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:41.418 15:34:58 -- host/auth.sh@44 -- # keyid=0 00:24:41.418 15:34:58 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:41.418 15:34:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:41.418 15:34:58 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:41.418 15:34:58 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTVhOGUxYjE5OTk5ZjI5MmE0MmE2YzJiODA3M2QzOTPQkGqb: 00:24:41.418 15:34:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:24:41.418 15:34:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:41.418 15:34:58 -- host/auth.sh@68 -- # digest=sha512 00:24:41.418 15:34:58 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:41.418 15:34:58 -- host/auth.sh@68 -- # keyid=0 00:24:41.418 15:34:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:41.418 15:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.418 
15:34:58 -- common/autotest_common.sh@10 -- # set +x 00:24:41.418 15:34:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.678 15:34:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:41.678 15:34:58 -- nvmf/common.sh@717 -- # local ip 00:24:41.678 15:34:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:41.678 15:34:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:41.678 15:34:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.678 15:34:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.678 15:34:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:41.678 15:34:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.678 15:34:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:41.679 15:34:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:41.679 15:34:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:41.679 15:34:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:41.679 15:34:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.679 15:34:58 -- common/autotest_common.sh@10 -- # set +x 00:24:42.248 nvme0n1 00:24:42.248 15:34:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.248 15:34:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.248 15:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.248 15:34:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:42.248 15:34:59 -- common/autotest_common.sh@10 -- # set +x 00:24:42.248 15:34:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.248 15:34:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.248 15:34:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.248 15:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.248 15:34:59 -- common/autotest_common.sh@10 -- # set +x 
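The log above repeats one pattern per (dhgroup, keyid) pair: set the target's DH-HMAC-CHAP key, restrict the initiator to a single digest/dhgroup via `bdev_nvme_set_options`, attach the controller with the matching `--dhchap-key`, confirm `nvme0` appears in `bdev_nvme_get_controllers`, then detach. A dry-run sketch of that loop is below; it only prints the RPC command lines (the `rpc.py` wrapper name is an assumption here, while the NQNs, address, and key IDs are taken from the log), so nothing talks to a live SPDK target:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the auth test loop seen in the log above.
# NOTE: command strings are echoed, not executed; "rpc.py" is assumed
# to be SPDK's RPC client, as in a typical SPDK checkout.
digest=sha512
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0

connect_authenticate() {
    local dhgroup=$1 keyid=$2
    # Allow exactly one digest/dhgroup pair on the initiator side,
    # then attach with the key under test and clean up.
    echo "rpc.py bdev_nvme_set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup"
    echo "rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q $hostnqn -n $subnqn --dhchap-key key$keyid"
    echo "rpc.py bdev_nvme_detach_controller nvme0"
}

# Same iteration order as the log: each dhgroup crossed with keys 0..4.
for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in 0 1 2 3 4; do
        connect_authenticate "$dhgroup" "$keyid"
    done
done
```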
00:24:42.248 15:34:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.248 15:34:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:42.248 15:34:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:42.248 15:34:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:42.248 15:34:59 -- host/auth.sh@44 -- # digest=sha512 00:24:42.249 15:34:59 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:42.249 15:34:59 -- host/auth.sh@44 -- # keyid=1 00:24:42.249 15:34:59 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:42.249 15:34:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:42.249 15:34:59 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:42.249 15:34:59 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:42.249 15:34:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:24:42.249 15:34:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:42.249 15:34:59 -- host/auth.sh@68 -- # digest=sha512 00:24:42.249 15:34:59 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:42.249 15:34:59 -- host/auth.sh@68 -- # keyid=1 00:24:42.249 15:34:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:42.249 15:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.249 15:34:59 -- common/autotest_common.sh@10 -- # set +x 00:24:42.249 15:34:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.249 15:34:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:42.249 15:34:59 -- nvmf/common.sh@717 -- # local ip 00:24:42.249 15:34:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:42.249 15:34:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:42.249 15:34:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.249 15:34:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:24:42.249 15:34:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:42.249 15:34:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.249 15:34:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:42.249 15:34:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:42.249 15:34:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:42.249 15:34:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:42.249 15:34:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.249 15:34:59 -- common/autotest_common.sh@10 -- # set +x 00:24:43.189 nvme0n1 00:24:43.189 15:35:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.189 15:35:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.189 15:35:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:43.189 15:35:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.189 15:35:00 -- common/autotest_common.sh@10 -- # set +x 00:24:43.189 15:35:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.189 15:35:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.189 15:35:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.189 15:35:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.189 15:35:00 -- common/autotest_common.sh@10 -- # set +x 00:24:43.189 15:35:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.189 15:35:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:43.189 15:35:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:43.189 15:35:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:43.189 15:35:00 -- host/auth.sh@44 -- # digest=sha512 00:24:43.189 15:35:00 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:43.189 15:35:00 -- host/auth.sh@44 -- # keyid=2 00:24:43.189 15:35:00 -- host/auth.sh@45 -- # 
key=DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:43.189 15:35:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:43.189 15:35:00 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:43.189 15:35:00 -- host/auth.sh@49 -- # echo DHHC-1:01:OTA3ZjQ1YjkxNTg0NmY5ZTBkZmYyYjU4ZWViZDc4YmLNO2iw: 00:24:43.189 15:35:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:24:43.189 15:35:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:43.189 15:35:00 -- host/auth.sh@68 -- # digest=sha512 00:24:43.189 15:35:00 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:43.189 15:35:00 -- host/auth.sh@68 -- # keyid=2 00:24:43.189 15:35:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:43.189 15:35:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.189 15:35:00 -- common/autotest_common.sh@10 -- # set +x 00:24:43.189 15:35:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.189 15:35:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:43.189 15:35:00 -- nvmf/common.sh@717 -- # local ip 00:24:43.189 15:35:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:43.189 15:35:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:43.189 15:35:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.189 15:35:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.189 15:35:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:43.189 15:35:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.189 15:35:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:43.189 15:35:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:43.189 15:35:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:43.189 15:35:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:43.189 15:35:00 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.189 15:35:00 -- common/autotest_common.sh@10 -- # set +x 00:24:44.130 nvme0n1 00:24:44.130 15:35:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.130 15:35:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.130 15:35:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:44.130 15:35:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.130 15:35:01 -- common/autotest_common.sh@10 -- # set +x 00:24:44.130 15:35:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.130 15:35:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.130 15:35:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.130 15:35:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.130 15:35:01 -- common/autotest_common.sh@10 -- # set +x 00:24:44.130 15:35:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.130 15:35:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:44.130 15:35:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:44.130 15:35:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:44.130 15:35:01 -- host/auth.sh@44 -- # digest=sha512 00:24:44.130 15:35:01 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:44.130 15:35:01 -- host/auth.sh@44 -- # keyid=3 00:24:44.130 15:35:01 -- host/auth.sh@45 -- # key=DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 00:24:44.130 15:35:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:44.130 15:35:01 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:44.130 15:35:01 -- host/auth.sh@49 -- # echo DHHC-1:02:ZjJiYWM1MzAzNjI1ZjlhZTlkMzI4MDA1NTA3OGQyYjZjNmMxMzE1MGJiYzc4ZDYy6zJpDQ==: 00:24:44.130 15:35:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:24:44.130 15:35:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:44.130 15:35:01 -- host/auth.sh@68 -- # digest=sha512 00:24:44.130 15:35:01 -- 
host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:44.130 15:35:01 -- host/auth.sh@68 -- # keyid=3 00:24:44.130 15:35:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:44.130 15:35:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.130 15:35:01 -- common/autotest_common.sh@10 -- # set +x 00:24:44.130 15:35:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.130 15:35:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:44.130 15:35:01 -- nvmf/common.sh@717 -- # local ip 00:24:44.130 15:35:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:44.130 15:35:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:44.130 15:35:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.130 15:35:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.130 15:35:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:44.130 15:35:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.130 15:35:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:44.131 15:35:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:44.131 15:35:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:44.131 15:35:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:44.131 15:35:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.131 15:35:01 -- common/autotest_common.sh@10 -- # set +x 00:24:44.700 nvme0n1 00:24:44.700 15:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.700 15:35:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.700 15:35:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:44.700 15:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.700 15:35:02 -- common/autotest_common.sh@10 -- # set +x 00:24:44.700 15:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:24:44.700 15:35:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.700 15:35:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.700 15:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.700 15:35:02 -- common/autotest_common.sh@10 -- # set +x 00:24:44.700 15:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.700 15:35:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:44.700 15:35:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:44.700 15:35:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:44.700 15:35:02 -- host/auth.sh@44 -- # digest=sha512 00:24:44.700 15:35:02 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:44.700 15:35:02 -- host/auth.sh@44 -- # keyid=4 00:24:44.700 15:35:02 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=: 00:24:44.700 15:35:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:44.700 15:35:02 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:44.700 15:35:02 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTM1ZjZkZGIzYzFjMTc2MjRiY2I3YmVjMjU0YWQ3NmVlODgyMTg3OTIwOWE0NDI2OGExMTQwYjhlZDIyZTVjZP9psy8=: 00:24:44.700 15:35:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:24:44.700 15:35:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:44.700 15:35:02 -- host/auth.sh@68 -- # digest=sha512 00:24:44.700 15:35:02 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:44.700 15:35:02 -- host/auth.sh@68 -- # keyid=4 00:24:44.700 15:35:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:44.700 15:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.700 15:35:02 -- common/autotest_common.sh@10 -- # set +x 00:24:44.700 15:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.700 15:35:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:44.700 15:35:02 -- 
nvmf/common.sh@717 -- # local ip 00:24:44.700 15:35:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:44.700 15:35:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:44.700 15:35:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.700 15:35:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.700 15:35:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:44.700 15:35:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.700 15:35:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:44.700 15:35:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:44.700 15:35:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:44.700 15:35:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:44.700 15:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.700 15:35:02 -- common/autotest_common.sh@10 -- # set +x 00:24:45.642 nvme0n1 00:24:45.642 15:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.642 15:35:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.642 15:35:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:45.642 15:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.642 15:35:02 -- common/autotest_common.sh@10 -- # set +x 00:24:45.643 15:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.643 15:35:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.643 15:35:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.643 15:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.643 15:35:02 -- common/autotest_common.sh@10 -- # set +x 00:24:45.643 15:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.643 15:35:02 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:45.643 15:35:02 -- host/auth.sh@42 -- # local 
digest dhgroup keyid key 00:24:45.643 15:35:02 -- host/auth.sh@44 -- # digest=sha256 00:24:45.643 15:35:02 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:45.643 15:35:02 -- host/auth.sh@44 -- # keyid=1 00:24:45.643 15:35:02 -- host/auth.sh@45 -- # key=DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:45.643 15:35:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:45.643 15:35:02 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:45.643 15:35:02 -- host/auth.sh@49 -- # echo DHHC-1:00:MjRhMzNiOThhMWQxYWE1MTBmOWZkOWI2MmJlNjQwNDc3NjE3Zjg4NzQ2MmZkNTA3FN6w6g==: 00:24:45.643 15:35:02 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:45.643 15:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.643 15:35:02 -- common/autotest_common.sh@10 -- # set +x 00:24:45.643 15:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.643 15:35:02 -- host/auth.sh@119 -- # get_main_ns_ip 00:24:45.643 15:35:02 -- nvmf/common.sh@717 -- # local ip 00:24:45.643 15:35:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:45.643 15:35:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:45.643 15:35:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.643 15:35:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.643 15:35:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:45.643 15:35:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.643 15:35:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:45.643 15:35:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:45.643 15:35:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:45.643 15:35:02 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:45.643 15:35:02 -- common/autotest_common.sh@638 -- # local es=0 00:24:45.643 
15:35:02 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:45.643 15:35:02 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:45.643 15:35:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:45.643 15:35:02 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:45.643 15:35:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:45.643 15:35:02 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:45.643 15:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.643 15:35:02 -- common/autotest_common.sh@10 -- # set +x 00:24:45.643 request: 00:24:45.643 { 00:24:45.643 "name": "nvme0", 00:24:45.643 "trtype": "tcp", 00:24:45.643 "traddr": "10.0.0.1", 00:24:45.643 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:45.643 "adrfam": "ipv4", 00:24:45.643 "trsvcid": "4420", 00:24:45.643 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:45.643 "method": "bdev_nvme_attach_controller", 00:24:45.643 "req_id": 1 00:24:45.643 } 00:24:45.643 Got JSON-RPC error response 00:24:45.643 response: 00:24:45.643 { 00:24:45.643 "code": -32602, 00:24:45.643 "message": "Invalid parameters" 00:24:45.643 } 00:24:45.643 15:35:03 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:45.643 15:35:03 -- common/autotest_common.sh@641 -- # es=1 00:24:45.643 15:35:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:45.643 15:35:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:45.643 15:35:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:45.643 15:35:03 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.643 15:35:03 -- host/auth.sh@121 -- # jq length 00:24:45.643 15:35:03 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:24:45.643 15:35:03 -- common/autotest_common.sh@10 -- # set +x 00:24:45.643 15:35:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.643 15:35:03 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:24:45.643 15:35:03 -- host/auth.sh@124 -- # get_main_ns_ip 00:24:45.643 15:35:03 -- nvmf/common.sh@717 -- # local ip 00:24:45.643 15:35:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:45.643 15:35:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:45.643 15:35:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.643 15:35:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.643 15:35:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:45.643 15:35:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.643 15:35:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:45.643 15:35:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:45.643 15:35:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:45.643 15:35:03 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:45.643 15:35:03 -- common/autotest_common.sh@638 -- # local es=0 00:24:45.643 15:35:03 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:45.643 15:35:03 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:45.643 15:35:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:45.643 15:35:03 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:45.643 15:35:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:45.643 15:35:03 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 
-n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:45.643 15:35:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.643 15:35:03 -- common/autotest_common.sh@10 -- # set +x 00:24:45.904 request: 00:24:45.904 { 00:24:45.904 "name": "nvme0", 00:24:45.904 "trtype": "tcp", 00:24:45.904 "traddr": "10.0.0.1", 00:24:45.904 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:45.904 "adrfam": "ipv4", 00:24:45.904 "trsvcid": "4420", 00:24:45.904 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:45.904 "dhchap_key": "key2", 00:24:45.904 "method": "bdev_nvme_attach_controller", 00:24:45.904 "req_id": 1 00:24:45.904 } 00:24:45.904 Got JSON-RPC error response 00:24:45.904 response: 00:24:45.904 { 00:24:45.904 "code": -32602, 00:24:45.904 "message": "Invalid parameters" 00:24:45.904 } 00:24:45.904 15:35:03 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:45.904 15:35:03 -- common/autotest_common.sh@641 -- # es=1 00:24:45.904 15:35:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:45.904 15:35:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:45.904 15:35:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:45.904 15:35:03 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.904 15:35:03 -- host/auth.sh@127 -- # jq length 00:24:45.904 15:35:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.904 15:35:03 -- common/autotest_common.sh@10 -- # set +x 00:24:45.904 15:35:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.904 15:35:03 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:24:45.904 15:35:03 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:24:45.904 15:35:03 -- host/auth.sh@130 -- # cleanup 00:24:45.904 15:35:03 -- host/auth.sh@24 -- # nvmftestfini 00:24:45.904 15:35:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:45.904 15:35:03 -- nvmf/common.sh@117 -- # sync 00:24:45.904 15:35:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:45.904 15:35:03 -- nvmf/common.sh@120 -- # set +e 00:24:45.904 
15:35:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:45.904 15:35:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:45.904 rmmod nvme_tcp 00:24:45.904 rmmod nvme_fabrics 00:24:45.904 15:35:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:45.904 15:35:03 -- nvmf/common.sh@124 -- # set -e 00:24:45.904 15:35:03 -- nvmf/common.sh@125 -- # return 0 00:24:45.904 15:35:03 -- nvmf/common.sh@478 -- # '[' -n 1753862 ']' 00:24:45.904 15:35:03 -- nvmf/common.sh@479 -- # killprocess 1753862 00:24:45.904 15:35:03 -- common/autotest_common.sh@936 -- # '[' -z 1753862 ']' 00:24:45.904 15:35:03 -- common/autotest_common.sh@940 -- # kill -0 1753862 00:24:45.904 15:35:03 -- common/autotest_common.sh@941 -- # uname 00:24:45.904 15:35:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:45.904 15:35:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1753862 00:24:45.904 15:35:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:45.904 15:35:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:45.904 15:35:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1753862' 00:24:45.904 killing process with pid 1753862 00:24:45.904 15:35:03 -- common/autotest_common.sh@955 -- # kill 1753862 00:24:45.904 15:35:03 -- common/autotest_common.sh@960 -- # wait 1753862 00:24:46.164 15:35:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:46.164 15:35:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:46.164 15:35:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:46.164 15:35:03 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:46.164 15:35:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:46.164 15:35:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.164 15:35:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:46.164 15:35:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:24:48.073 15:35:05 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:48.073 15:35:05 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:48.333 15:35:05 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:48.333 15:35:05 -- host/auth.sh@27 -- # clean_kernel_target 00:24:48.333 15:35:05 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:48.333 15:35:05 -- nvmf/common.sh@675 -- # echo 0 00:24:48.333 15:35:05 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:48.333 15:35:05 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:48.333 15:35:05 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:48.333 15:35:05 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:48.333 15:35:05 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:24:48.333 15:35:05 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:24:48.333 15:35:05 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:51.631 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:24:51.631 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:24:51.631 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:24:51.631 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:24:51.631 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:24:51.631 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:24:51.631 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:24:51.631 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:24:51.631 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:24:51.631 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:24:51.631 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:24:51.631 0000:00:01.5 
(8086 0b00): ioatdma -> vfio-pci 00:24:51.631 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:24:51.631 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:24:51.631 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:24:51.631 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:24:51.631 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:24:51.631 15:35:09 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.IzG /tmp/spdk.key-null.wRe /tmp/spdk.key-sha256.jo2 /tmp/spdk.key-sha384.voO /tmp/spdk.key-sha512.7Id /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:51.892 15:35:09 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:55.190 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:24:55.190 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:24:55.190 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:24:55.190 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:24:55.190 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:24:55.191 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:24:55.191 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:24:55.191 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:24:55.191 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:24:55.191 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:24:55.191 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:24:55.191 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:24:55.191 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:24:55.191 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:24:55.191 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:24:55.191 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:24:55.191 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:24:55.451 00:24:55.451 real 0m57.487s 00:24:55.451 user 0m51.131s 
00:24:55.451 sys 0m14.704s 00:24:55.451 15:35:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:55.451 15:35:12 -- common/autotest_common.sh@10 -- # set +x 00:24:55.451 ************************************ 00:24:55.451 END TEST nvmf_auth 00:24:55.451 ************************************ 00:24:55.451 15:35:12 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:24:55.451 15:35:12 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:55.451 15:35:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:55.451 15:35:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:55.451 15:35:12 -- common/autotest_common.sh@10 -- # set +x 00:24:55.712 ************************************ 00:24:55.712 START TEST nvmf_digest 00:24:55.712 ************************************ 00:24:55.712 15:35:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:55.712 * Looking for test storage... 
00:24:55.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:55.712 15:35:13 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:55.712 15:35:13 -- nvmf/common.sh@7 -- # uname -s 00:24:55.712 15:35:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:55.712 15:35:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:55.712 15:35:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:55.712 15:35:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:55.712 15:35:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:55.712 15:35:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:55.712 15:35:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:55.712 15:35:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:55.712 15:35:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:55.712 15:35:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:55.712 15:35:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:55.712 15:35:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:55.712 15:35:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:55.712 15:35:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:55.712 15:35:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:55.712 15:35:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:55.712 15:35:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:55.712 15:35:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.712 15:35:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.712 15:35:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.712 15:35:13 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.712 15:35:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.712 15:35:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.712 15:35:13 -- paths/export.sh@5 -- # export PATH 00:24:55.712 15:35:13 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.712 15:35:13 -- nvmf/common.sh@47 -- # : 0 00:24:55.712 15:35:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:55.712 15:35:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:55.712 15:35:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:55.712 15:35:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.712 15:35:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.712 15:35:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:55.712 15:35:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:55.712 15:35:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:55.712 15:35:13 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:55.712 15:35:13 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:55.712 15:35:13 -- host/digest.sh@16 -- # runtime=2 00:24:55.712 15:35:13 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:55.712 15:35:13 -- host/digest.sh@138 -- # nvmftestinit 00:24:55.712 15:35:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:55.712 15:35:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:55.712 15:35:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:55.712 15:35:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:55.712 15:35:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:55.712 15:35:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.712 15:35:13 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:24:55.712 15:35:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.712 15:35:13 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:55.712 15:35:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:55.712 15:35:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:55.712 15:35:13 -- common/autotest_common.sh@10 -- # set +x 00:25:03.858 15:35:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:03.858 15:35:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:03.858 15:35:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:03.858 15:35:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:03.858 15:35:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:03.858 15:35:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:03.858 15:35:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:03.858 15:35:20 -- nvmf/common.sh@295 -- # net_devs=() 00:25:03.858 15:35:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:03.858 15:35:20 -- nvmf/common.sh@296 -- # e810=() 00:25:03.858 15:35:20 -- nvmf/common.sh@296 -- # local -ga e810 00:25:03.858 15:35:20 -- nvmf/common.sh@297 -- # x722=() 00:25:03.858 15:35:20 -- nvmf/common.sh@297 -- # local -ga x722 00:25:03.858 15:35:20 -- nvmf/common.sh@298 -- # mlx=() 00:25:03.858 15:35:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:03.858 15:35:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:03.858 15:35:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:03.858 15:35:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:03.858 15:35:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:03.858 15:35:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:03.858 15:35:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:03.858 15:35:20 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:03.858 15:35:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:03.858 15:35:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:03.858 15:35:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:03.858 15:35:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:03.858 15:35:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:03.858 15:35:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:03.858 15:35:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:03.858 15:35:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:03.858 15:35:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:03.858 15:35:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:03.858 15:35:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:03.858 15:35:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:03.858 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:03.858 15:35:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:03.858 15:35:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:03.858 15:35:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.858 15:35:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.858 15:35:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:03.858 15:35:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:03.858 15:35:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:03.859 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:03.859 15:35:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:03.859 15:35:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:03.859 15:35:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.859 15:35:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.859 15:35:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:25:03.859 15:35:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:03.859 15:35:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:03.859 15:35:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:03.859 15:35:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:03.859 15:35:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.859 15:35:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:03.859 15:35:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.859 15:35:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:03.859 Found net devices under 0000:31:00.0: cvl_0_0 00:25:03.859 15:35:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.859 15:35:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:03.859 15:35:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.859 15:35:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:03.859 15:35:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.859 15:35:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:03.859 Found net devices under 0000:31:00.1: cvl_0_1 00:25:03.859 15:35:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.859 15:35:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:03.859 15:35:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:03.859 15:35:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:03.859 15:35:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:03.859 15:35:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:03.859 15:35:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.859 15:35:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:03.859 15:35:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:03.859 15:35:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:03.859 15:35:20 -- nvmf/common.sh@236 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:25:03.859 15:35:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:03.859 15:35:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:03.859 15:35:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:03.859 15:35:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.859 15:35:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:03.859 15:35:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:03.859 15:35:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:03.859 15:35:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:03.859 15:35:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:03.859 15:35:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:03.859 15:35:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:03.859 15:35:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:03.859 15:35:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:03.859 15:35:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:03.859 15:35:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:03.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:03.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:25:03.859 00:25:03.859 --- 10.0.0.2 ping statistics --- 00:25:03.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.859 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:25:03.859 15:35:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:03.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:03.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:25:03.859 00:25:03.859 --- 10.0.0.1 ping statistics --- 00:25:03.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.859 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:25:03.859 15:35:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:03.859 15:35:20 -- nvmf/common.sh@411 -- # return 0 00:25:03.859 15:35:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:03.859 15:35:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:03.859 15:35:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:03.859 15:35:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:03.859 15:35:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:03.859 15:35:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:03.859 15:35:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:03.859 15:35:20 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:03.859 15:35:20 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:03.859 15:35:20 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:03.859 15:35:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:03.859 15:35:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:03.859 15:35:20 -- common/autotest_common.sh@10 -- # set +x 00:25:03.859 ************************************ 00:25:03.859 START TEST nvmf_digest_clean 00:25:03.859 ************************************ 00:25:03.859 15:35:20 -- common/autotest_common.sh@1111 -- # run_digest 00:25:03.859 15:35:20 -- host/digest.sh@120 -- # local dsa_initiator 00:25:03.859 15:35:20 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:03.859 15:35:20 -- host/digest.sh@121 -- # dsa_initiator=false 00:25:03.859 15:35:20 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:03.859 15:35:20 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:03.859 15:35:20 -- 
nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:03.859 15:35:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:03.859 15:35:20 -- common/autotest_common.sh@10 -- # set +x 00:25:03.859 15:35:20 -- nvmf/common.sh@470 -- # nvmfpid=1770527 00:25:03.859 15:35:20 -- nvmf/common.sh@471 -- # waitforlisten 1770527 00:25:03.859 15:35:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:03.859 15:35:20 -- common/autotest_common.sh@817 -- # '[' -z 1770527 ']' 00:25:03.859 15:35:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.859 15:35:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:03.859 15:35:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.859 15:35:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:03.859 15:35:20 -- common/autotest_common.sh@10 -- # set +x 00:25:03.859 [2024-04-26 15:35:20.599792] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:25:03.859 [2024-04-26 15:35:20.599853] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.859 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.859 [2024-04-26 15:35:20.671625] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.859 [2024-04-26 15:35:20.743792] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.859 [2024-04-26 15:35:20.743832] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:03.859 [2024-04-26 15:35:20.743847] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:03.859 [2024-04-26 15:35:20.743854] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:03.859 [2024-04-26 15:35:20.743859] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:03.859 [2024-04-26 15:35:20.743884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.119 15:35:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:04.119 15:35:21 -- common/autotest_common.sh@850 -- # return 0 00:25:04.119 15:35:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:04.119 15:35:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:04.119 15:35:21 -- common/autotest_common.sh@10 -- # set +x 00:25:04.119 15:35:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:04.119 15:35:21 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:04.119 15:35:21 -- host/digest.sh@126 -- # common_target_config 00:25:04.119 15:35:21 -- host/digest.sh@43 -- # rpc_cmd 00:25:04.119 15:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.119 15:35:21 -- common/autotest_common.sh@10 -- # set +x 00:25:04.119 null0 00:25:04.119 [2024-04-26 15:35:21.478669] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.119 [2024-04-26 15:35:21.502853] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:04.120 15:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.120 15:35:21 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:04.120 15:35:21 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:04.120 15:35:21 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:04.120 15:35:21 -- host/digest.sh@80 -- # rw=randread 00:25:04.120 
15:35:21 -- host/digest.sh@80 -- # bs=4096 00:25:04.120 15:35:21 -- host/digest.sh@80 -- # qd=128 00:25:04.120 15:35:21 -- host/digest.sh@80 -- # scan_dsa=false 00:25:04.120 15:35:21 -- host/digest.sh@83 -- # bperfpid=1770637 00:25:04.120 15:35:21 -- host/digest.sh@84 -- # waitforlisten 1770637 /var/tmp/bperf.sock 00:25:04.120 15:35:21 -- common/autotest_common.sh@817 -- # '[' -z 1770637 ']' 00:25:04.120 15:35:21 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:04.120 15:35:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:04.120 15:35:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:04.120 15:35:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:04.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:04.120 15:35:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:04.120 15:35:21 -- common/autotest_common.sh@10 -- # set +x 00:25:04.120 [2024-04-26 15:35:21.554947] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:25:04.120 [2024-04-26 15:35:21.554992] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1770637 ] 00:25:04.380 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.380 [2024-04-26 15:35:21.630025] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.380 [2024-04-26 15:35:21.693054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.951 15:35:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:04.951 15:35:22 -- common/autotest_common.sh@850 -- # return 0 00:25:04.951 15:35:22 -- host/digest.sh@86 -- # false 00:25:04.951 15:35:22 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:04.951 15:35:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:05.211 15:35:22 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:05.211 15:35:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:05.471 nvme0n1 00:25:05.471 15:35:22 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:05.471 15:35:22 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:05.471 Running I/O for 2 seconds... 
00:25:08.014 00:25:08.014 Latency(us) 00:25:08.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.014 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:08.014 nvme0n1 : 2.01 19853.79 77.55 0.00 0.00 6440.03 3399.68 16274.77 00:25:08.014 =================================================================================================================== 00:25:08.014 Total : 19853.79 77.55 0.00 0.00 6440.03 3399.68 16274.77 00:25:08.014 0 00:25:08.014 15:35:24 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:08.014 15:35:24 -- host/digest.sh@93 -- # get_accel_stats 00:25:08.014 15:35:24 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:08.014 15:35:24 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:08.014 | select(.opcode=="crc32c") 00:25:08.014 | "\(.module_name) \(.executed)"' 00:25:08.014 15:35:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:08.014 15:35:25 -- host/digest.sh@94 -- # false 00:25:08.014 15:35:25 -- host/digest.sh@94 -- # exp_module=software 00:25:08.014 15:35:25 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:08.014 15:35:25 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:08.014 15:35:25 -- host/digest.sh@98 -- # killprocess 1770637 00:25:08.014 15:35:25 -- common/autotest_common.sh@936 -- # '[' -z 1770637 ']' 00:25:08.014 15:35:25 -- common/autotest_common.sh@940 -- # kill -0 1770637 00:25:08.014 15:35:25 -- common/autotest_common.sh@941 -- # uname 00:25:08.014 15:35:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:08.014 15:35:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1770637 00:25:08.014 15:35:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:08.014 15:35:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:08.014 15:35:25 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 1770637' 00:25:08.014 killing process with pid 1770637 00:25:08.014 15:35:25 -- common/autotest_common.sh@955 -- # kill 1770637 00:25:08.014 Received shutdown signal, test time was about 2.000000 seconds 00:25:08.014 00:25:08.014 Latency(us) 00:25:08.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.014 =================================================================================================================== 00:25:08.014 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:08.014 15:35:25 -- common/autotest_common.sh@960 -- # wait 1770637 00:25:08.014 15:35:25 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:08.014 15:35:25 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:08.014 15:35:25 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:08.015 15:35:25 -- host/digest.sh@80 -- # rw=randread 00:25:08.015 15:35:25 -- host/digest.sh@80 -- # bs=131072 00:25:08.015 15:35:25 -- host/digest.sh@80 -- # qd=16 00:25:08.015 15:35:25 -- host/digest.sh@80 -- # scan_dsa=false 00:25:08.015 15:35:25 -- host/digest.sh@83 -- # bperfpid=1771318 00:25:08.015 15:35:25 -- host/digest.sh@84 -- # waitforlisten 1771318 /var/tmp/bperf.sock 00:25:08.015 15:35:25 -- common/autotest_common.sh@817 -- # '[' -z 1771318 ']' 00:25:08.015 15:35:25 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:08.015 15:35:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:08.015 15:35:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:08.015 15:35:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:08.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:08.015 15:35:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:08.015 15:35:25 -- common/autotest_common.sh@10 -- # set +x 00:25:08.015 [2024-04-26 15:35:25.275106] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:25:08.015 [2024-04-26 15:35:25.275159] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1771318 ] 00:25:08.015 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:08.015 Zero copy mechanism will not be used. 00:25:08.015 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.015 [2024-04-26 15:35:25.355228] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.015 [2024-04-26 15:35:25.416957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.953 15:35:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:08.953 15:35:26 -- common/autotest_common.sh@850 -- # return 0 00:25:08.953 15:35:26 -- host/digest.sh@86 -- # false 00:25:08.953 15:35:26 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:08.953 15:35:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:08.953 15:35:26 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:08.953 15:35:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:09.213 nvme0n1 00:25:09.213 15:35:26 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:09.213 15:35:26 -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:09.213 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:09.213 Zero copy mechanism will not be used. 00:25:09.213 Running I/O for 2 seconds... 00:25:11.117 00:25:11.117 Latency(us) 00:25:11.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.117 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:11.117 nvme0n1 : 2.00 5328.20 666.03 0.00 0.00 2999.58 942.08 6744.75 00:25:11.117 =================================================================================================================== 00:25:11.117 Total : 5328.20 666.03 0.00 0.00 2999.58 942.08 6744.75 00:25:11.117 0 00:25:11.117 15:35:28 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:11.117 15:35:28 -- host/digest.sh@93 -- # get_accel_stats 00:25:11.117 15:35:28 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:11.117 15:35:28 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:11.117 | select(.opcode=="crc32c") 00:25:11.117 | "\(.module_name) \(.executed)"' 00:25:11.117 15:35:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:11.378 15:35:28 -- host/digest.sh@94 -- # false 00:25:11.378 15:35:28 -- host/digest.sh@94 -- # exp_module=software 00:25:11.378 15:35:28 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:11.378 15:35:28 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:11.378 15:35:28 -- host/digest.sh@98 -- # killprocess 1771318 00:25:11.378 15:35:28 -- common/autotest_common.sh@936 -- # '[' -z 1771318 ']' 00:25:11.378 15:35:28 -- common/autotest_common.sh@940 -- # kill -0 1771318 00:25:11.378 15:35:28 -- common/autotest_common.sh@941 -- # uname 00:25:11.378 15:35:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:11.378 15:35:28 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1771318 00:25:11.378 15:35:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:11.378 15:35:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:11.378 15:35:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1771318' 00:25:11.378 killing process with pid 1771318 00:25:11.378 15:35:28 -- common/autotest_common.sh@955 -- # kill 1771318 00:25:11.378 Received shutdown signal, test time was about 2.000000 seconds 00:25:11.378 00:25:11.378 Latency(us) 00:25:11.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.378 =================================================================================================================== 00:25:11.378 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:11.378 15:35:28 -- common/autotest_common.sh@960 -- # wait 1771318 00:25:11.639 15:35:28 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:11.639 15:35:28 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:11.639 15:35:28 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:11.639 15:35:28 -- host/digest.sh@80 -- # rw=randwrite 00:25:11.639 15:35:28 -- host/digest.sh@80 -- # bs=4096 00:25:11.639 15:35:28 -- host/digest.sh@80 -- # qd=128 00:25:11.639 15:35:28 -- host/digest.sh@80 -- # scan_dsa=false 00:25:11.639 15:35:28 -- host/digest.sh@83 -- # bperfpid=1772054 00:25:11.639 15:35:28 -- host/digest.sh@84 -- # waitforlisten 1772054 /var/tmp/bperf.sock 00:25:11.639 15:35:28 -- common/autotest_common.sh@817 -- # '[' -z 1772054 ']' 00:25:11.639 15:35:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:11.639 15:35:28 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:11.639 15:35:28 -- common/autotest_common.sh@822 -- # local 
max_retries=100 00:25:11.639 15:35:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:11.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:11.639 15:35:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:11.639 15:35:28 -- common/autotest_common.sh@10 -- # set +x 00:25:11.639 [2024-04-26 15:35:28.943556] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:25:11.639 [2024-04-26 15:35:28.943614] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1772054 ] 00:25:11.639 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.639 [2024-04-26 15:35:29.018298] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.639 [2024-04-26 15:35:29.070452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.579 15:35:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:12.579 15:35:29 -- common/autotest_common.sh@850 -- # return 0 00:25:12.579 15:35:29 -- host/digest.sh@86 -- # false 00:25:12.579 15:35:29 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:12.579 15:35:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:12.579 15:35:29 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:12.579 15:35:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:12.839 nvme0n1 00:25:12.839 15:35:30 -- host/digest.sh@92 -- # 
bperf_py perform_tests 00:25:12.839 15:35:30 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:12.839 Running I/O for 2 seconds... 00:25:15.381 00:25:15.381 Latency(us) 00:25:15.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.381 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:15.382 nvme0n1 : 2.01 20658.79 80.70 0.00 0.00 6183.45 6062.08 13926.40 00:25:15.382 =================================================================================================================== 00:25:15.382 Total : 20658.79 80.70 0.00 0.00 6183.45 6062.08 13926.40 00:25:15.382 0 00:25:15.382 15:35:32 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:15.382 15:35:32 -- host/digest.sh@93 -- # get_accel_stats 00:25:15.382 15:35:32 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:15.382 15:35:32 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:15.382 | select(.opcode=="crc32c") 00:25:15.382 | "\(.module_name) \(.executed)"' 00:25:15.382 15:35:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:15.382 15:35:32 -- host/digest.sh@94 -- # false 00:25:15.382 15:35:32 -- host/digest.sh@94 -- # exp_module=software 00:25:15.382 15:35:32 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:15.382 15:35:32 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:15.382 15:35:32 -- host/digest.sh@98 -- # killprocess 1772054 00:25:15.382 15:35:32 -- common/autotest_common.sh@936 -- # '[' -z 1772054 ']' 00:25:15.382 15:35:32 -- common/autotest_common.sh@940 -- # kill -0 1772054 00:25:15.382 15:35:32 -- common/autotest_common.sh@941 -- # uname 00:25:15.382 15:35:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:15.382 15:35:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
1772054 00:25:15.382 15:35:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:15.382 15:35:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:15.382 15:35:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1772054' 00:25:15.382 killing process with pid 1772054 00:25:15.382 15:35:32 -- common/autotest_common.sh@955 -- # kill 1772054 00:25:15.382 Received shutdown signal, test time was about 2.000000 seconds 00:25:15.382 00:25:15.382 Latency(us) 00:25:15.382 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.382 =================================================================================================================== 00:25:15.382 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:15.382 15:35:32 -- common/autotest_common.sh@960 -- # wait 1772054 00:25:15.382 15:35:32 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:15.382 15:35:32 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:15.382 15:35:32 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:15.382 15:35:32 -- host/digest.sh@80 -- # rw=randwrite 00:25:15.382 15:35:32 -- host/digest.sh@80 -- # bs=131072 00:25:15.382 15:35:32 -- host/digest.sh@80 -- # qd=16 00:25:15.382 15:35:32 -- host/digest.sh@80 -- # scan_dsa=false 00:25:15.382 15:35:32 -- host/digest.sh@83 -- # bperfpid=1772838 00:25:15.382 15:35:32 -- host/digest.sh@84 -- # waitforlisten 1772838 /var/tmp/bperf.sock 00:25:15.382 15:35:32 -- common/autotest_common.sh@817 -- # '[' -z 1772838 ']' 00:25:15.382 15:35:32 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:15.382 15:35:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:15.382 15:35:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:15.382 15:35:32 -- 
common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:15.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:15.382 15:35:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:15.382 15:35:32 -- common/autotest_common.sh@10 -- # set +x 00:25:15.382 [2024-04-26 15:35:32.625994] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:25:15.382 [2024-04-26 15:35:32.626050] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1772838 ] 00:25:15.382 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:15.382 Zero copy mechanism will not be used. 00:25:15.382 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.382 [2024-04-26 15:35:32.700393] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.382 [2024-04-26 15:35:32.752113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.324 15:35:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:16.324 15:35:33 -- common/autotest_common.sh@850 -- # return 0 00:25:16.324 15:35:33 -- host/digest.sh@86 -- # false 00:25:16.324 15:35:33 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:16.324 15:35:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:16.324 15:35:33 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:16.324 15:35:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:16.583 nvme0n1 00:25:16.583 15:35:33 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:16.583 15:35:33 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:16.583 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:16.583 Zero copy mechanism will not be used. 00:25:16.583 Running I/O for 2 seconds... 00:25:18.526 00:25:18.526 Latency(us) 00:25:18.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.526 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:18.526 nvme0n1 : 2.00 4291.87 536.48 0.00 0.00 3722.42 1843.20 10376.53 00:25:18.526 =================================================================================================================== 00:25:18.526 Total : 4291.87 536.48 0.00 0.00 3722.42 1843.20 10376.53 00:25:18.526 0 00:25:18.828 15:35:35 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:18.828 15:35:35 -- host/digest.sh@93 -- # get_accel_stats 00:25:18.828 15:35:35 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:18.828 15:35:35 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:18.828 | select(.opcode=="crc32c") 00:25:18.828 | "\(.module_name) \(.executed)"' 00:25:18.828 15:35:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:18.828 15:35:36 -- host/digest.sh@94 -- # false 00:25:18.828 15:35:36 -- host/digest.sh@94 -- # exp_module=software 00:25:18.828 15:35:36 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:18.828 15:35:36 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:18.828 15:35:36 -- host/digest.sh@98 -- # killprocess 1772838 00:25:18.828 15:35:36 -- common/autotest_common.sh@936 -- # '[' -z 1772838 ']' 00:25:18.828 15:35:36 -- common/autotest_common.sh@940 -- # kill -0 
1772838 00:25:18.828 15:35:36 -- common/autotest_common.sh@941 -- # uname 00:25:18.828 15:35:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:18.828 15:35:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1772838 00:25:18.828 15:35:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:18.828 15:35:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:18.828 15:35:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1772838' 00:25:18.828 killing process with pid 1772838 00:25:18.828 15:35:36 -- common/autotest_common.sh@955 -- # kill 1772838 00:25:18.828 Received shutdown signal, test time was about 2.000000 seconds 00:25:18.828 00:25:18.828 Latency(us) 00:25:18.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.828 =================================================================================================================== 00:25:18.828 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:18.828 15:35:36 -- common/autotest_common.sh@960 -- # wait 1772838 00:25:19.089 15:35:36 -- host/digest.sh@132 -- # killprocess 1770527 00:25:19.089 15:35:36 -- common/autotest_common.sh@936 -- # '[' -z 1770527 ']' 00:25:19.089 15:35:36 -- common/autotest_common.sh@940 -- # kill -0 1770527 00:25:19.089 15:35:36 -- common/autotest_common.sh@941 -- # uname 00:25:19.089 15:35:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:19.089 15:35:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1770527 00:25:19.089 15:35:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:19.089 15:35:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:19.089 15:35:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1770527' 00:25:19.089 killing process with pid 1770527 00:25:19.089 15:35:36 -- common/autotest_common.sh@955 -- # kill 1770527 00:25:19.089 15:35:36 -- common/autotest_common.sh@960 
-- # wait 1770527 00:25:19.089 00:25:19.089 real 0m15.931s 00:25:19.089 user 0m31.272s 00:25:19.089 sys 0m3.359s 00:25:19.089 15:35:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:19.089 15:35:36 -- common/autotest_common.sh@10 -- # set +x 00:25:19.089 ************************************ 00:25:19.089 END TEST nvmf_digest_clean 00:25:19.089 ************************************ 00:25:19.089 15:35:36 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:19.089 15:35:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:19.089 15:35:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:19.089 15:35:36 -- common/autotest_common.sh@10 -- # set +x 00:25:19.350 ************************************ 00:25:19.350 START TEST nvmf_digest_error 00:25:19.350 ************************************ 00:25:19.350 15:35:36 -- common/autotest_common.sh@1111 -- # run_digest_error 00:25:19.350 15:35:36 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:19.350 15:35:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:19.350 15:35:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:19.350 15:35:36 -- common/autotest_common.sh@10 -- # set +x 00:25:19.350 15:35:36 -- nvmf/common.sh@470 -- # nvmfpid=1773721 00:25:19.350 15:35:36 -- nvmf/common.sh@471 -- # waitforlisten 1773721 00:25:19.350 15:35:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:19.350 15:35:36 -- common/autotest_common.sh@817 -- # '[' -z 1773721 ']' 00:25:19.350 15:35:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.350 15:35:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:19.350 15:35:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:19.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.350 15:35:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:19.350 15:35:36 -- common/autotest_common.sh@10 -- # set +x 00:25:19.350 [2024-04-26 15:35:36.728607] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:25:19.350 [2024-04-26 15:35:36.728663] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:19.350 EAL: No free 2048 kB hugepages reported on node 1 00:25:19.609 [2024-04-26 15:35:36.799178] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.609 [2024-04-26 15:35:36.870871] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:19.609 [2024-04-26 15:35:36.870908] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:19.609 [2024-04-26 15:35:36.870915] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:19.610 [2024-04-26 15:35:36.870922] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:19.610 [2024-04-26 15:35:36.870927] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:19.610 [2024-04-26 15:35:36.870951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.180 15:35:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:20.180 15:35:37 -- common/autotest_common.sh@850 -- # return 0 00:25:20.180 15:35:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:20.180 15:35:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:20.180 15:35:37 -- common/autotest_common.sh@10 -- # set +x 00:25:20.180 15:35:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.180 15:35:37 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:20.180 15:35:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.180 15:35:37 -- common/autotest_common.sh@10 -- # set +x 00:25:20.180 [2024-04-26 15:35:37.540874] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:20.180 15:35:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.180 15:35:37 -- host/digest.sh@105 -- # common_target_config 00:25:20.180 15:35:37 -- host/digest.sh@43 -- # rpc_cmd 00:25:20.180 15:35:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.180 15:35:37 -- common/autotest_common.sh@10 -- # set +x 00:25:20.180 null0 00:25:20.180 [2024-04-26 15:35:37.621676] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.441 [2024-04-26 15:35:37.645880] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:20.441 15:35:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.441 15:35:37 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:20.441 15:35:37 -- host/digest.sh@54 -- # local rw bs qd 00:25:20.441 15:35:37 -- host/digest.sh@56 -- # rw=randread 00:25:20.441 15:35:37 -- host/digest.sh@56 -- # bs=4096 00:25:20.441 15:35:37 -- host/digest.sh@56 -- # qd=128 00:25:20.441 15:35:37 -- 
host/digest.sh@58 -- # bperfpid=1773822 00:25:20.441 15:35:37 -- host/digest.sh@60 -- # waitforlisten 1773822 /var/tmp/bperf.sock 00:25:20.441 15:35:37 -- common/autotest_common.sh@817 -- # '[' -z 1773822 ']' 00:25:20.441 15:35:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:20.441 15:35:37 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:20.441 15:35:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:20.441 15:35:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:20.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:20.441 15:35:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:20.441 15:35:37 -- common/autotest_common.sh@10 -- # set +x 00:25:20.441 [2024-04-26 15:35:37.697548] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:25:20.441 [2024-04-26 15:35:37.697595] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1773822 ] 00:25:20.441 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.441 [2024-04-26 15:35:37.772235] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.441 [2024-04-26 15:35:37.825147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.383 15:35:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:21.383 15:35:38 -- common/autotest_common.sh@850 -- # return 0 00:25:21.383 15:35:38 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:21.383 15:35:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:21.383 15:35:38 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:21.383 15:35:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.383 15:35:38 -- common/autotest_common.sh@10 -- # set +x 00:25:21.383 15:35:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.383 15:35:38 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:21.383 15:35:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:21.643 nvme0n1 00:25:21.643 15:35:39 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:21.643 15:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.643 15:35:39 -- common/autotest_common.sh@10 -- # 
set +x 00:25:21.643 15:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.643 15:35:39 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:21.643 15:35:39 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:21.904 Running I/O for 2 seconds... 00:25:21.904 [2024-04-26 15:35:39.122443] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:21.904 [2024-04-26 15:35:39.122473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.904 [2024-04-26 15:35:39.122483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.904 [2024-04-26 15:35:39.134829] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:21.904 [2024-04-26 15:35:39.134853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.904 [2024-04-26 15:35:39.134860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.904 [2024-04-26 15:35:39.148828] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:21.904 [2024-04-26 15:35:39.148849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.904 [2024-04-26 15:35:39.148856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.904 [2024-04-26 15:35:39.163230] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:21.905 [2024-04-26 15:35:39.163248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.905 [2024-04-26 15:35:39.163254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.905 [2024-04-26 15:35:39.176756] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:21.905 [2024-04-26 15:35:39.176774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.905 [2024-04-26 15:35:39.176780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.905 [2024-04-26 15:35:39.189337] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:21.905 [2024-04-26 15:35:39.189354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.905 [2024-04-26 15:35:39.189360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.905 [2024-04-26 15:35:39.200882] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:21.905 [2024-04-26 15:35:39.200900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.905 [2024-04-26 15:35:39.200906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:21.905 [2024-04-26 15:35:39.212698] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:21.905 [2024-04-26 15:35:39.212716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.905 [2024-04-26 15:35:39.212722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.905 [2024-04-26 15:35:39.226318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:21.905 [2024-04-26 15:35:39.226336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.905 [2024-04-26 15:35:39.226342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.905 [2024-04-26 15:35:39.240234] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:21.905 [2024-04-26 15:35:39.240252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.905 [2024-04-26 15:35:39.240259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.905 [2024-04-26 15:35:39.253502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:21.905 [2024-04-26 15:35:39.253520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.905 [2024-04-26 15:35:39.253526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.905 [2024-04-26 15:35:39.265523] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:21.905 [2024-04-26 15:35:39.265540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.905 [2024-04-26 15:35:39.265546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.905 [2024-04-26 15:35:39.277822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:21.905 [2024-04-26 15:35:39.277844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.905 [2024-04-26 15:35:39.277850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.905 [2024-04-26 15:35:39.290691] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:21.905 [2024-04-26 15:35:39.290709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.905 [2024-04-26 15:35:39.290715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.905 [2024-04-26 15:35:39.301100] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:21.905 [2024-04-26 15:35:39.301116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.905 [2024-04-26 
15:35:39.301123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.905 [2024-04-26 15:35:39.314657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:21.905 [2024-04-26 15:35:39.314675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.905 [2024-04-26 15:35:39.314681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.905 [2024-04-26 15:35:39.328726] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:21.905 [2024-04-26 15:35:39.328744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.905 [2024-04-26 15:35:39.328754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.905 [2024-04-26 15:35:39.341070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:21.905 [2024-04-26 15:35:39.341088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.905 [2024-04-26 15:35:39.341094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.166 [2024-04-26 15:35:39.354148] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:22.166 [2024-04-26 15:35:39.354166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13295 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.166 [2024-04-26 15:35:39.354172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.166 [2024-04-26 15:35:39.367719] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:22.166 [2024-04-26 15:35:39.367736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.166 [2024-04-26 15:35:39.367742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.166 [2024-04-26 15:35:39.377322] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:22.166 [2024-04-26 15:35:39.377339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.166 [2024-04-26 15:35:39.377345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.166 [2024-04-26 15:35:39.391175] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:22.166 [2024-04-26 15:35:39.391192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.166 [2024-04-26 15:35:39.391198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.166 [2024-04-26 15:35:39.403797] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:22.166 [2024-04-26 15:35:39.403814] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.166 [2024-04-26 15:35:39.403820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.166 [2024-04-26 15:35:39.417789] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:22.166 [2024-04-26 15:35:39.417806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.166 [2024-04-26 15:35:39.417812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.166 [2024-04-26 15:35:39.430649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:22.166 [2024-04-26 15:35:39.430666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.167 [2024-04-26 15:35:39.430672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.167 [2024-04-26 15:35:39.442978] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:22.167 [2024-04-26 15:35:39.442994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.167 [2024-04-26 15:35:39.443000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.167 [2024-04-26 15:35:39.453701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ce62a0) 00:25:22.167 [2024-04-26 15:35:39.453718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.167 [2024-04-26 15:35:39.453723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.167 [2024-04-26 15:35:39.468107] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:22.167 [2024-04-26 15:35:39.468124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.167 [2024-04-26 15:35:39.468131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.167 [2024-04-26 15:35:39.481134] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:22.167 [2024-04-26 15:35:39.481151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.167 [2024-04-26 15:35:39.481157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.167 [2024-04-26 15:35:39.494462] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:22.167 [2024-04-26 15:35:39.494479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.167 [2024-04-26 15:35:39.494485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.167 [2024-04-26 15:35:39.507587] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:22.167 [2024-04-26 15:35:39.507605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.167 [2024-04-26 15:35:39.507611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.167 [2024-04-26 15:35:39.520595] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:22.167 [2024-04-26 15:35:39.520612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.167 [2024-04-26 15:35:39.520618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.167 [2024-04-26 15:35:39.534353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:22.167 [2024-04-26 15:35:39.534369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.167 [2024-04-26 15:35:39.534375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.167 [2024-04-26 15:35:39.546919] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:22.167 [2024-04-26 15:35:39.546935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.167 [2024-04-26 15:35:39.546946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:22.167 [2024-04-26 15:35:39.556958] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:22.167 [2024-04-26 15:35:39.556976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.167 [2024-04-26 15:35:39.556982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.167 [2024-04-26 15:35:39.570165] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:22.167 [2024-04-26 15:35:39.570181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.167 [2024-04-26 15:35:39.570188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.167 [2024-04-26 15:35:39.582599] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:22.167 [2024-04-26 15:35:39.582616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.167 [2024-04-26 15:35:39.582622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.167 [2024-04-26 15:35:39.596477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:22.167 [2024-04-26 15:35:39.596495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.167 [2024-04-26 15:35:39.596501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.167 [2024-04-26 15:35:39.608937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.167 [2024-04-26 15:35:39.608954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.167 [2024-04-26 15:35:39.608961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.428 [2024-04-26 15:35:39.622121] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.428 [2024-04-26 15:35:39.622138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.428 [2024-04-26 15:35:39.622144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.428 [2024-04-26 15:35:39.634919] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.428 [2024-04-26 15:35:39.634936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.428 [2024-04-26 15:35:39.634943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.428 [2024-04-26 15:35:39.646983] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.428 [2024-04-26 15:35:39.646999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.428 [2024-04-26 15:35:39.647005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.428 [2024-04-26 15:35:39.659510] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.428 [2024-04-26 15:35:39.659530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.428 [2024-04-26 15:35:39.659536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.428 [2024-04-26 15:35:39.672688] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.428 [2024-04-26 15:35:39.672705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.428 [2024-04-26 15:35:39.672711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.428 [2024-04-26 15:35:39.685312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.428 [2024-04-26 15:35:39.685329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.428 [2024-04-26 15:35:39.685335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.428 [2024-04-26 15:35:39.697632] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.428 [2024-04-26 15:35:39.697649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.428 [2024-04-26 15:35:39.697655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.428 [2024-04-26 15:35:39.709727] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.428 [2024-04-26 15:35:39.709744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.428 [2024-04-26 15:35:39.709750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.428 [2024-04-26 15:35:39.721406] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.428 [2024-04-26 15:35:39.721424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.428 [2024-04-26 15:35:39.721429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.428 [2024-04-26 15:35:39.734847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.428 [2024-04-26 15:35:39.734865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.428 [2024-04-26 15:35:39.734872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.428 [2024-04-26 15:35:39.748355] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.428 [2024-04-26 15:35:39.748372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.429 [2024-04-26 15:35:39.748378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.429 [2024-04-26 15:35:39.762730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.429 [2024-04-26 15:35:39.762746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.429 [2024-04-26 15:35:39.762753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.429 [2024-04-26 15:35:39.776946] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.429 [2024-04-26 15:35:39.776966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.429 [2024-04-26 15:35:39.776972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.429 [2024-04-26 15:35:39.786456] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.429 [2024-04-26 15:35:39.786473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.429 [2024-04-26 15:35:39.786479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.429 [2024-04-26 15:35:39.801335] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.429 [2024-04-26 15:35:39.801353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.429 [2024-04-26 15:35:39.801359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.429 [2024-04-26 15:35:39.812655] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.429 [2024-04-26 15:35:39.812672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.429 [2024-04-26 15:35:39.812679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.429 [2024-04-26 15:35:39.824844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.429 [2024-04-26 15:35:39.824862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.429 [2024-04-26 15:35:39.824868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.429 [2024-04-26 15:35:39.837989] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.429 [2024-04-26 15:35:39.838006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.429 [2024-04-26 15:35:39.838013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.429 [2024-04-26 15:35:39.852005] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.429 [2024-04-26 15:35:39.852022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.429 [2024-04-26 15:35:39.852028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.429 [2024-04-26 15:35:39.864359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.429 [2024-04-26 15:35:39.864376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.429 [2024-04-26 15:35:39.864382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.689 [2024-04-26 15:35:39.877057] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.689 [2024-04-26 15:35:39.877074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.689 [2024-04-26 15:35:39.877084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.689 [2024-04-26 15:35:39.889650] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.689 [2024-04-26 15:35:39.889667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.689 [2024-04-26 15:35:39.889673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.689 [2024-04-26 15:35:39.900600] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.689 [2024-04-26 15:35:39.900618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.689 [2024-04-26 15:35:39.900624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.689 [2024-04-26 15:35:39.914004] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.689 [2024-04-26 15:35:39.914021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.689 [2024-04-26 15:35:39.914028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.689 [2024-04-26 15:35:39.927898] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.689 [2024-04-26 15:35:39.927915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.689 [2024-04-26 15:35:39.927921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.689 [2024-04-26 15:35:39.940652] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.689 [2024-04-26 15:35:39.940669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.689 [2024-04-26 15:35:39.940675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.689 [2024-04-26 15:35:39.951086] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.689 [2024-04-26 15:35:39.951102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.689 [2024-04-26 15:35:39.951108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.690 [2024-04-26 15:35:39.964608] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.690 [2024-04-26 15:35:39.964625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.690 [2024-04-26 15:35:39.964631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.690 [2024-04-26 15:35:39.978497] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.690 [2024-04-26 15:35:39.978514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.690 [2024-04-26 15:35:39.978520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.690 [2024-04-26 15:35:39.990891] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.690 [2024-04-26 15:35:39.990911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.690 [2024-04-26 15:35:39.990917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.690 [2024-04-26 15:35:40.004042] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.690 [2024-04-26 15:35:40.004060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.690 [2024-04-26 15:35:40.004066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.690 [2024-04-26 15:35:40.017183] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.690 [2024-04-26 15:35:40.017200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.690 [2024-04-26 15:35:40.017206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.690 [2024-04-26 15:35:40.030047] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.690 [2024-04-26 15:35:40.030064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.690 [2024-04-26 15:35:40.030070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.690 [2024-04-26 15:35:40.043040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.690 [2024-04-26 15:35:40.043057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.690 [2024-04-26 15:35:40.043063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.690 [2024-04-26 15:35:40.054688] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.690 [2024-04-26 15:35:40.054704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.690 [2024-04-26 15:35:40.054710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.690 [2024-04-26 15:35:40.067743] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.690 [2024-04-26 15:35:40.067760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.690 [2024-04-26 15:35:40.067766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.690 [2024-04-26 15:35:40.079956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.690 [2024-04-26 15:35:40.079973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.690 [2024-04-26 15:35:40.079979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.690 [2024-04-26 15:35:40.093596] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.690 [2024-04-26 15:35:40.093613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.690 [2024-04-26 15:35:40.093622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.690 [2024-04-26 15:35:40.106168] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.690 [2024-04-26 15:35:40.106185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.690 [2024-04-26 15:35:40.106191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.690 [2024-04-26 15:35:40.116311] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.690 [2024-04-26 15:35:40.116328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.690 [2024-04-26 15:35:40.116334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.690 [2024-04-26 15:35:40.130023] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.690 [2024-04-26 15:35:40.130041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.690 [2024-04-26 15:35:40.130048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.951 [2024-04-26 15:35:40.143957] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.951 [2024-04-26 15:35:40.143974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.951 [2024-04-26 15:35:40.143980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.951 [2024-04-26 15:35:40.158390] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.951 [2024-04-26 15:35:40.158407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.951 [2024-04-26 15:35:40.158413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.951 [2024-04-26 15:35:40.169690] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.951 [2024-04-26 15:35:40.169707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.951 [2024-04-26 15:35:40.169714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.951 [2024-04-26 15:35:40.182519] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.951 [2024-04-26 15:35:40.182536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.951 [2024-04-26 15:35:40.182542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.951 [2024-04-26 15:35:40.195661] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.951 [2024-04-26 15:35:40.195678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.951 [2024-04-26 15:35:40.195685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.951 [2024-04-26 15:35:40.206232] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.951 [2024-04-26 15:35:40.206253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.951 [2024-04-26 15:35:40.206259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.951 [2024-04-26 15:35:40.219287] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.951 [2024-04-26 15:35:40.219304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.951 [2024-04-26 15:35:40.219310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.951 [2024-04-26 15:35:40.233454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.951 [2024-04-26 15:35:40.233472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.952 [2024-04-26 15:35:40.233478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.952 [2024-04-26 15:35:40.244711] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.952 [2024-04-26 15:35:40.244728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.952 [2024-04-26 15:35:40.244733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.952 [2024-04-26 15:35:40.257075] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.952 [2024-04-26 15:35:40.257092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.952 [2024-04-26 15:35:40.257098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.952 [2024-04-26 15:35:40.270713] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.952 [2024-04-26 15:35:40.270730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.952 [2024-04-26 15:35:40.270736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.952 [2024-04-26 15:35:40.283546] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.952 [2024-04-26 15:35:40.283563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.952 [2024-04-26 15:35:40.283569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.952 [2024-04-26 15:35:40.294524] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.952 [2024-04-26 15:35:40.294540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.952 [2024-04-26 15:35:40.294547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.952 [2024-04-26 15:35:40.307660] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.952 [2024-04-26 15:35:40.307677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.952 [2024-04-26 15:35:40.307683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.952 [2024-04-26 15:35:40.320531] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.952 [2024-04-26 15:35:40.320548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.952 [2024-04-26 15:35:40.320553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.952 [2024-04-26 15:35:40.334098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.952 [2024-04-26 15:35:40.334114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.952 [2024-04-26 15:35:40.334120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.952 [2024-04-26 15:35:40.345882] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.952 [2024-04-26 15:35:40.345899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.952 [2024-04-26 15:35:40.345905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.952 [2024-04-26 15:35:40.359249] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.952 [2024-04-26 15:35:40.359266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.952 [2024-04-26 15:35:40.359272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.952 [2024-04-26 15:35:40.370687] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.952 [2024-04-26 15:35:40.370703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.952 [2024-04-26 15:35:40.370709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.952 [2024-04-26 15:35:40.384191] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.952 [2024-04-26 15:35:40.384208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.952 [2024-04-26 15:35:40.384214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:22.952 [2024-04-26 15:35:40.396008] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:22.952 [2024-04-26 15:35:40.396025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.952 [2024-04-26 15:35:40.396031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.213 [2024-04-26 15:35:40.409011] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:23.213 [2024-04-26 15:35:40.409028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.213 [2024-04-26 15:35:40.409034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.213 [2024-04-26 15:35:40.421611] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:23.213 [2024-04-26 15:35:40.421628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.213 [2024-04-26 15:35:40.421637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.213 [2024-04-26 15:35:40.433810] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:23.213 [2024-04-26 15:35:40.433826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.213 [2024-04-26 15:35:40.433833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.213 [2024-04-26 15:35:40.445296] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:23.213 [2024-04-26 15:35:40.445313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.213 [2024-04-26 15:35:40.445319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.213 [2024-04-26 15:35:40.458317] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:23.213 [2024-04-26 15:35:40.458334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.213 [2024-04-26 15:35:40.458340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.214 [2024-04-26 15:35:40.471724] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:23.214 [2024-04-26 15:35:40.471741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.214 [2024-04-26 15:35:40.471747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.214 [2024-04-26 15:35:40.484016] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:23.214 [2024-04-26 15:35:40.484033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.214 [2024-04-26 15:35:40.484039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.214 [2024-04-26 15:35:40.497220] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:23.214 [2024-04-26 15:35:40.497237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.214 [2024-04-26 15:35:40.497243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.214 [2024-04-26 15:35:40.510141] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:23.214 [2024-04-26 15:35:40.510157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.214 [2024-04-26 15:35:40.510163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.214 [2024-04-26 15:35:40.521711] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:23.214 [2024-04-26 15:35:40.521728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.214 [2024-04-26 15:35:40.521734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.214 [2024-04-26 15:35:40.534023] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:23.214 [2024-04-26 15:35:40.534042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.214 [2024-04-26 15:35:40.534048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.214 [2024-04-26 15:35:40.546610] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:23.214 [2024-04-26 15:35:40.546627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.214 [2024-04-26 15:35:40.546633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.214 [2024-04-26 15:35:40.560416] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:23.214 [2024-04-26 15:35:40.560432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.214 [2024-04-26 15:35:40.560438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.214 [2024-04-26 15:35:40.572287] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:23.214 [2024-04-26 15:35:40.572304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.214 [2024-04-26 15:35:40.572310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.214 [2024-04-26 15:35:40.584614] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:23.214 [2024-04-26 15:35:40.584630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.214 [2024-04-26 15:35:40.584636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.214 [2024-04-26 15:35:40.595966] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:23.214 [2024-04-26 15:35:40.595982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.214 [2024-04-26 15:35:40.595988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.214 [2024-04-26 15:35:40.610713] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:23.214 [2024-04-26 15:35:40.610730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.214 [2024-04-26 15:35:40.610737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.214 [2024-04-26 15:35:40.620657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0)
00:25:23.214 [2024-04-26 15:35:40.620674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:23.214 [2024-04-26 15:35:40.620680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0
sqhd:0001 p:0 m:0 dnr:0 00:25:23.214 [2024-04-26 15:35:40.635685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.214 [2024-04-26 15:35:40.635702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.214 [2024-04-26 15:35:40.635708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.214 [2024-04-26 15:35:40.646716] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.214 [2024-04-26 15:35:40.646733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.214 [2024-04-26 15:35:40.646739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.214 [2024-04-26 15:35:40.659713] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.214 [2024-04-26 15:35:40.659730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.214 [2024-04-26 15:35:40.659736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.476 [2024-04-26 15:35:40.672666] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.476 [2024-04-26 15:35:40.672682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.476 [2024-04-26 15:35:40.672688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.476 [2024-04-26 15:35:40.684966] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.476 [2024-04-26 15:35:40.684982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.476 [2024-04-26 15:35:40.684988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.476 [2024-04-26 15:35:40.697870] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.476 [2024-04-26 15:35:40.697886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.476 [2024-04-26 15:35:40.697892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.476 [2024-04-26 15:35:40.709375] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.476 [2024-04-26 15:35:40.709391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.476 [2024-04-26 15:35:40.709397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.476 [2024-04-26 15:35:40.722381] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.476 [2024-04-26 15:35:40.722398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.476 [2024-04-26 
15:35:40.722404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.476 [2024-04-26 15:35:40.735239] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.476 [2024-04-26 15:35:40.735255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.476 [2024-04-26 15:35:40.735262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.476 [2024-04-26 15:35:40.748343] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.476 [2024-04-26 15:35:40.748363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.476 [2024-04-26 15:35:40.748369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.476 [2024-04-26 15:35:40.760795] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.476 [2024-04-26 15:35:40.760812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.476 [2024-04-26 15:35:40.760818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.477 [2024-04-26 15:35:40.773859] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.477 [2024-04-26 15:35:40.773876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11998 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.477 [2024-04-26 15:35:40.773882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.477 [2024-04-26 15:35:40.786082] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.477 [2024-04-26 15:35:40.786097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.477 [2024-04-26 15:35:40.786103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.477 [2024-04-26 15:35:40.799213] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.477 [2024-04-26 15:35:40.799229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.477 [2024-04-26 15:35:40.799236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.477 [2024-04-26 15:35:40.811116] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.477 [2024-04-26 15:35:40.811132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.477 [2024-04-26 15:35:40.811138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.477 [2024-04-26 15:35:40.824751] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.477 [2024-04-26 15:35:40.824768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.477 [2024-04-26 15:35:40.824774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.477 [2024-04-26 15:35:40.836975] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.477 [2024-04-26 15:35:40.836991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.477 [2024-04-26 15:35:40.836997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.477 [2024-04-26 15:35:40.846889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.477 [2024-04-26 15:35:40.846912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.477 [2024-04-26 15:35:40.846918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.477 [2024-04-26 15:35:40.861686] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.477 [2024-04-26 15:35:40.861702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.477 [2024-04-26 15:35:40.861709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.477 [2024-04-26 15:35:40.875868] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ce62a0) 00:25:23.477 [2024-04-26 15:35:40.875885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.477 [2024-04-26 15:35:40.875891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.477 [2024-04-26 15:35:40.887797] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.477 [2024-04-26 15:35:40.887813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.477 [2024-04-26 15:35:40.887819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.477 [2024-04-26 15:35:40.898998] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.477 [2024-04-26 15:35:40.899015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.477 [2024-04-26 15:35:40.899021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.477 [2024-04-26 15:35:40.911285] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.477 [2024-04-26 15:35:40.911302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.477 [2024-04-26 15:35:40.911308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.739 [2024-04-26 15:35:40.925205] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.739 [2024-04-26 15:35:40.925222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.739 [2024-04-26 15:35:40.925229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.739 [2024-04-26 15:35:40.938355] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.739 [2024-04-26 15:35:40.938373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.739 [2024-04-26 15:35:40.938379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.739 [2024-04-26 15:35:40.951165] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.739 [2024-04-26 15:35:40.951182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.739 [2024-04-26 15:35:40.951188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.739 [2024-04-26 15:35:40.963148] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.739 [2024-04-26 15:35:40.963165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.739 [2024-04-26 15:35:40.963174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:23.739 [2024-04-26 15:35:40.975405] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.739 [2024-04-26 15:35:40.975423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.739 [2024-04-26 15:35:40.975429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.739 [2024-04-26 15:35:40.988224] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.739 [2024-04-26 15:35:40.988241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.739 [2024-04-26 15:35:40.988248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.739 [2024-04-26 15:35:41.001758] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.739 [2024-04-26 15:35:41.001775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.739 [2024-04-26 15:35:41.001781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.739 [2024-04-26 15:35:41.012513] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.739 [2024-04-26 15:35:41.012530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.739 [2024-04-26 15:35:41.012536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.739 [2024-04-26 15:35:41.024875] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.739 [2024-04-26 15:35:41.024893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.739 [2024-04-26 15:35:41.024899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.739 [2024-04-26 15:35:41.038213] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.739 [2024-04-26 15:35:41.038231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.739 [2024-04-26 15:35:41.038237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.739 [2024-04-26 15:35:41.051986] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.739 [2024-04-26 15:35:41.052004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.739 [2024-04-26 15:35:41.052011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.739 [2024-04-26 15:35:41.064626] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.739 [2024-04-26 15:35:41.064643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.739 [2024-04-26 
15:35:41.064649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.739 [2024-04-26 15:35:41.077624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.739 [2024-04-26 15:35:41.077644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.739 [2024-04-26 15:35:41.077650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.739 [2024-04-26 15:35:41.087373] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.739 [2024-04-26 15:35:41.087389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.739 [2024-04-26 15:35:41.087395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.739 [2024-04-26 15:35:41.102167] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce62a0) 00:25:23.739 [2024-04-26 15:35:41.102184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.739 [2024-04-26 15:35:41.102190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.739 00:25:23.739 Latency(us) 00:25:23.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.739 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:23.739 nvme0n1 : 2.00 20051.23 78.33 0.00 0.00 6377.36 1884.16 18350.08 
00:25:23.739 =================================================================================================================== 00:25:23.739 Total : 20051.23 78.33 0.00 0.00 6377.36 1884.16 18350.08 00:25:23.739 0 00:25:23.739 15:35:41 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:23.739 15:35:41 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:23.739 15:35:41 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:23.739 | .driver_specific 00:25:23.739 | .nvme_error 00:25:23.739 | .status_code 00:25:23.739 | .command_transient_transport_error' 00:25:23.739 15:35:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:24.000 15:35:41 -- host/digest.sh@71 -- # (( 157 > 0 )) 00:25:24.000 15:35:41 -- host/digest.sh@73 -- # killprocess 1773822 00:25:24.000 15:35:41 -- common/autotest_common.sh@936 -- # '[' -z 1773822 ']' 00:25:24.000 15:35:41 -- common/autotest_common.sh@940 -- # kill -0 1773822 00:25:24.000 15:35:41 -- common/autotest_common.sh@941 -- # uname 00:25:24.000 15:35:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:24.000 15:35:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1773822 00:25:24.000 15:35:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:24.000 15:35:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:24.000 15:35:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1773822' 00:25:24.000 killing process with pid 1773822 00:25:24.000 15:35:41 -- common/autotest_common.sh@955 -- # kill 1773822 00:25:24.000 Received shutdown signal, test time was about 2.000000 seconds 00:25:24.000 00:25:24.000 Latency(us) 00:25:24.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.000 =================================================================================================================== 00:25:24.000 
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:24.000 15:35:41 -- common/autotest_common.sh@960 -- # wait 1773822 00:25:24.261 15:35:41 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:24.261 15:35:41 -- host/digest.sh@54 -- # local rw bs qd 00:25:24.261 15:35:41 -- host/digest.sh@56 -- # rw=randread 00:25:24.261 15:35:41 -- host/digest.sh@56 -- # bs=131072 00:25:24.261 15:35:41 -- host/digest.sh@56 -- # qd=16 00:25:24.261 15:35:41 -- host/digest.sh@58 -- # bperfpid=1774643 00:25:24.261 15:35:41 -- host/digest.sh@60 -- # waitforlisten 1774643 /var/tmp/bperf.sock 00:25:24.261 15:35:41 -- common/autotest_common.sh@817 -- # '[' -z 1774643 ']' 00:25:24.261 15:35:41 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:24.261 15:35:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:24.261 15:35:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:24.261 15:35:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:24.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:24.261 15:35:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:24.261 15:35:41 -- common/autotest_common.sh@10 -- # set +x 00:25:24.261 [2024-04-26 15:35:41.505166] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:25:24.261 [2024-04-26 15:35:41.505222] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774643 ] 00:25:24.261 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:24.261 Zero copy mechanism will not be used. 
00:25:24.261 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.261 [2024-04-26 15:35:41.580208] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.261 [2024-04-26 15:35:41.632021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.834 15:35:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:24.834 15:35:42 -- common/autotest_common.sh@850 -- # return 0 00:25:24.834 15:35:42 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:24.834 15:35:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:25.094 15:35:42 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:25.094 15:35:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.094 15:35:42 -- common/autotest_common.sh@10 -- # set +x 00:25:25.094 15:35:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.095 15:35:42 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:25.095 15:35:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:25.355 nvme0n1 00:25:25.355 15:35:42 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:25.355 15:35:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.355 15:35:42 -- common/autotest_common.sh@10 -- # set +x 00:25:25.355 15:35:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.355 15:35:42 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:25.355 15:35:42 -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:25.355 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:25.355 Zero copy mechanism will not be used.
00:25:25.355 Running I/O for 2 seconds...
00:25:25.355 [2024-04-26 15:35:42.764009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10)
00:25:25.355 [2024-04-26 15:35:42.764043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:25.355 [2024-04-26 15:35:42.764052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... repeated entries of the same three-line pattern (15:35:42.774234 through 15:35:42.898090): *ERROR*: data digest error on tqpair=(0x1c73e10), then READ sqid:1 with varying cid/lba len:32, then COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 ...]
00:25:25.617 [2024-04-26 15:35:42.908140] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10)
00:25:25.617 [2024-04-26 15:35:42.908159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:25.617 [2024-04-26 15:35:42.908165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.617 [2024-04-26 15:35:42.916424] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.617 [2024-04-26 15:35:42.916443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.617 [2024-04-26 15:35:42.916449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.617 [2024-04-26 15:35:42.925195] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.617 [2024-04-26 15:35:42.925213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.617 [2024-04-26 15:35:42.925219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.617 [2024-04-26 15:35:42.933151] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.617 [2024-04-26 15:35:42.933168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.617 [2024-04-26 15:35:42.933174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.617 [2024-04-26 15:35:42.944105] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.617 [2024-04-26 15:35:42.944123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.617 [2024-04-26 15:35:42.944130] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.617 [2024-04-26 15:35:42.956251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.617 [2024-04-26 15:35:42.956269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.617 [2024-04-26 15:35:42.956275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.617 [2024-04-26 15:35:42.968743] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.617 [2024-04-26 15:35:42.968761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.617 [2024-04-26 15:35:42.968767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.617 [2024-04-26 15:35:42.980095] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.617 [2024-04-26 15:35:42.980113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.617 [2024-04-26 15:35:42.980119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.617 [2024-04-26 15:35:42.990766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.617 [2024-04-26 15:35:42.990785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:25.617 [2024-04-26 15:35:42.990791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.617 [2024-04-26 15:35:43.001415] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.617 [2024-04-26 15:35:43.001434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.617 [2024-04-26 15:35:43.001440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.617 [2024-04-26 15:35:43.010920] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.617 [2024-04-26 15:35:43.010938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.617 [2024-04-26 15:35:43.010945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.617 [2024-04-26 15:35:43.019608] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.617 [2024-04-26 15:35:43.019627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.617 [2024-04-26 15:35:43.019633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.617 [2024-04-26 15:35:43.029019] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.617 [2024-04-26 15:35:43.029038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.617 [2024-04-26 15:35:43.029044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.617 [2024-04-26 15:35:43.039214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.617 [2024-04-26 15:35:43.039232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.617 [2024-04-26 15:35:43.039242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.617 [2024-04-26 15:35:43.047655] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.617 [2024-04-26 15:35:43.047673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.617 [2024-04-26 15:35:43.047679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.617 [2024-04-26 15:35:43.057377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.617 [2024-04-26 15:35:43.057396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.617 [2024-04-26 15:35:43.057403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.067464] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.067483] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.880 [2024-04-26 15:35:43.067489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.077317] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.077336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.880 [2024-04-26 15:35:43.077342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.086478] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.086496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.880 [2024-04-26 15:35:43.086503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.095254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.095272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.880 [2024-04-26 15:35:43.095278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.105996] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.106013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.880 [2024-04-26 15:35:43.106020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.116371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.116389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.880 [2024-04-26 15:35:43.116395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.124941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.124963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.880 [2024-04-26 15:35:43.124969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.132479] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.132497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.880 [2024-04-26 15:35:43.132503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.143643] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.143661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.880 [2024-04-26 15:35:43.143668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.155275] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.155293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.880 [2024-04-26 15:35:43.155299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.165195] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.165213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.880 [2024-04-26 15:35:43.165220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.174288] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.174306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.880 [2024-04-26 15:35:43.174312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.185152] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.185170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.880 [2024-04-26 15:35:43.185176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.194733] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.194752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.880 [2024-04-26 15:35:43.194758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.205798] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.205815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.880 [2024-04-26 15:35:43.205821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.215821] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.215846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.880 [2024-04-26 15:35:43.215853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.224819] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.224842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.880 [2024-04-26 15:35:43.224849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.235689] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.235707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.880 [2024-04-26 15:35:43.235713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.244685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.244703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.880 [2024-04-26 15:35:43.244709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.255749] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.255768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.880 [2024-04-26 15:35:43.255774] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.265219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.265238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.880 [2024-04-26 15:35:43.265244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.274733] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.274751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.880 [2024-04-26 15:35:43.274757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.283750] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.283768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.880 [2024-04-26 15:35:43.283774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.291644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.291662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:25.880 [2024-04-26 15:35:43.291671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.880 [2024-04-26 15:35:43.300941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.880 [2024-04-26 15:35:43.300959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.881 [2024-04-26 15:35:43.300965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.881 [2024-04-26 15:35:43.311297] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.881 [2024-04-26 15:35:43.311314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.881 [2024-04-26 15:35:43.311321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.881 [2024-04-26 15:35:43.321603] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:25.881 [2024-04-26 15:35:43.321621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.881 [2024-04-26 15:35:43.321627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.141 [2024-04-26 15:35:43.330769] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.141 [2024-04-26 15:35:43.330787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.141 [2024-04-26 15:35:43.330794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.141 [2024-04-26 15:35:43.340553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.141 [2024-04-26 15:35:43.340571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.141 [2024-04-26 15:35:43.340577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.141 [2024-04-26 15:35:43.351806] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.141 [2024-04-26 15:35:43.351824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.141 [2024-04-26 15:35:43.351830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.141 [2024-04-26 15:35:43.361013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.141 [2024-04-26 15:35:43.361032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.141 [2024-04-26 15:35:43.361038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.141 [2024-04-26 15:35:43.370350] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.141 [2024-04-26 15:35:43.370368] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.142 [2024-04-26 15:35:43.370374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.142 [2024-04-26 15:35:43.379930] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.142 [2024-04-26 15:35:43.379951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.142 [2024-04-26 15:35:43.379957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.142 [2024-04-26 15:35:43.389034] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.142 [2024-04-26 15:35:43.389052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.142 [2024-04-26 15:35:43.389058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.142 [2024-04-26 15:35:43.398457] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.142 [2024-04-26 15:35:43.398475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.142 [2024-04-26 15:35:43.398481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.142 [2024-04-26 15:35:43.407289] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c73e10) 00:25:26.142 [2024-04-26 15:35:43.407307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.142 [2024-04-26 15:35:43.407313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.142 [2024-04-26 15:35:43.418027] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.142 [2024-04-26 15:35:43.418045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.142 [2024-04-26 15:35:43.418051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.142 [2024-04-26 15:35:43.427148] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.142 [2024-04-26 15:35:43.427166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.142 [2024-04-26 15:35:43.427172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.142 [2024-04-26 15:35:43.438379] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.142 [2024-04-26 15:35:43.438397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.142 [2024-04-26 15:35:43.438403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.142 [2024-04-26 15:35:43.448141] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.142 [2024-04-26 15:35:43.448159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.142 [2024-04-26 15:35:43.448165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... same three-record pattern repeated from 15:35:43.458 through 15:35:44.205 with varying timestamps, cid, lba, and sqhd (all qid:1, len:32): data digest error on tqpair=(0x1c73e10) from nvme_tcp.c:1447, READ command print from nvme_qpair.c:243, and COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion from nvme_qpair.c:474 ...]
00:25:26.931 [2024-04-26 15:35:44.218578] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.931 [2024-04-26 15:35:44.218596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.931 [2024-04-26 15:35:44.218603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.931 [2024-04-26 15:35:44.231436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.931 [2024-04-26 15:35:44.231454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.931 [2024-04-26 15:35:44.231460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.931 [2024-04-26 15:35:44.245332] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.931 [2024-04-26 15:35:44.245353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.931 [2024-04-26 15:35:44.245359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.931 [2024-04-26 15:35:44.258448] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.931 [2024-04-26 15:35:44.258465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.931 [2024-04-26 15:35:44.258471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.931 [2024-04-26 15:35:44.271495] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.931 [2024-04-26 15:35:44.271512] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.931 [2024-04-26 15:35:44.271518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.931 [2024-04-26 15:35:44.284897] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.931 [2024-04-26 15:35:44.284915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.931 [2024-04-26 15:35:44.284921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.931 [2024-04-26 15:35:44.297942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.931 [2024-04-26 15:35:44.297960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.931 [2024-04-26 15:35:44.297966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.931 [2024-04-26 15:35:44.310752] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.931 [2024-04-26 15:35:44.310769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.931 [2024-04-26 15:35:44.310775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.931 [2024-04-26 15:35:44.322289] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c73e10) 00:25:26.931 [2024-04-26 15:35:44.322307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.931 [2024-04-26 15:35:44.322313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.931 [2024-04-26 15:35:44.333629] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.931 [2024-04-26 15:35:44.333647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.931 [2024-04-26 15:35:44.333653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.931 [2024-04-26 15:35:44.344904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.931 [2024-04-26 15:35:44.344921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.931 [2024-04-26 15:35:44.344927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.931 [2024-04-26 15:35:44.357133] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.931 [2024-04-26 15:35:44.357149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.931 [2024-04-26 15:35:44.357155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.931 [2024-04-26 15:35:44.364413] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.931 [2024-04-26 15:35:44.364430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.931 [2024-04-26 15:35:44.364436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.931 [2024-04-26 15:35:44.374447] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:26.931 [2024-04-26 15:35:44.374464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.931 [2024-04-26 15:35:44.374471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.383205] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.383223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 15:35:44.383229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.393069] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.393087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 15:35:44.393093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.402593] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.402610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 15:35:44.402616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.412163] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.412180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 15:35:44.412186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.422251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.422269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 15:35:44.422275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.433330] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.433348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 15:35:44.433357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.442227] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.442245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 15:35:44.442251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.451318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.451334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 15:35:44.451340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.460570] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.460587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 15:35:44.460593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.470563] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.470580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 
15:35:44.470586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.481169] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.481186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 15:35:44.481192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.491353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.491371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 15:35:44.491377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.501586] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.501603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 15:35:44.501609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.511970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.511988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11808 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 15:35:44.511994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.522107] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.522126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 15:35:44.522132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.531429] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.531446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 15:35:44.531452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.539732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.539751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 15:35:44.539757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.549064] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.549082] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 15:35:44.549088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.558335] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.558352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 15:35:44.558358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.567355] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.567374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 15:35:44.567380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.577651] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.577669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 15:35:44.577675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.587413] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 
15:35:44.587430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 15:35:44.587436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.596894] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.596911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 15:35:44.596917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.607424] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.607441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 15:35:44.607447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.616565] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.616583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.193 [2024-04-26 15:35:44.616589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.193 [2024-04-26 15:35:44.626524] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1c73e10) 00:25:27.193 [2024-04-26 15:35:44.626542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.194 [2024-04-26 15:35:44.626548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.194 [2024-04-26 15:35:44.638258] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.194 [2024-04-26 15:35:44.638277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.194 [2024-04-26 15:35:44.638283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.454 [2024-04-26 15:35:44.648344] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.454 [2024-04-26 15:35:44.648363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.454 [2024-04-26 15:35:44.648369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.454 [2024-04-26 15:35:44.658065] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.454 [2024-04-26 15:35:44.658083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.454 [2024-04-26 15:35:44.658089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.454 [2024-04-26 15:35:44.669426] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.454 [2024-04-26 15:35:44.669443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.454 [2024-04-26 15:35:44.669449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.454 [2024-04-26 15:35:44.679094] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.454 [2024-04-26 15:35:44.679111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.454 [2024-04-26 15:35:44.679117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.454 [2024-04-26 15:35:44.687923] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.454 [2024-04-26 15:35:44.687940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.454 [2024-04-26 15:35:44.687950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.454 [2024-04-26 15:35:44.700396] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.454 [2024-04-26 15:35:44.700413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.454 [2024-04-26 15:35:44.700419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:25:27.454 [2024-04-26 15:35:44.713701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.454 [2024-04-26 15:35:44.713718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.454 [2024-04-26 15:35:44.713724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.454 [2024-04-26 15:35:44.726427] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.454 [2024-04-26 15:35:44.726444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.454 [2024-04-26 15:35:44.726450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.454 [2024-04-26 15:35:44.737151] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.454 [2024-04-26 15:35:44.737169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.454 [2024-04-26 15:35:44.737175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.454 [2024-04-26 15:35:44.748308] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.455 [2024-04-26 15:35:44.748325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.455 [2024-04-26 15:35:44.748331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.455 [2024-04-26 15:35:44.757909] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c73e10) 00:25:27.455 [2024-04-26 15:35:44.757927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.455 [2024-04-26 15:35:44.757933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.455 00:25:27.455 Latency(us) 00:25:27.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.455 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:27.455 nvme0n1 : 2.00 3064.13 383.02 0.00 0.00 5217.44 1419.95 13817.17 00:25:27.455 =================================================================================================================== 00:25:27.455 Total : 3064.13 383.02 0.00 0.00 5217.44 1419.95 13817.17 00:25:27.455 0 00:25:27.455 15:35:44 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:27.455 15:35:44 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:27.455 15:35:44 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:27.455 | .driver_specific 00:25:27.455 | .nvme_error 00:25:27.455 | .status_code 00:25:27.455 | .command_transient_transport_error' 00:25:27.455 15:35:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:27.714 15:35:44 -- host/digest.sh@71 -- # (( 198 > 0 )) 00:25:27.714 15:35:44 -- host/digest.sh@73 -- # killprocess 1774643 00:25:27.714 15:35:44 -- common/autotest_common.sh@936 -- # '[' -z 1774643 ']' 00:25:27.714 15:35:44 -- common/autotest_common.sh@940 -- # kill -0 1774643 00:25:27.714 15:35:44 -- common/autotest_common.sh@941 -- # uname 00:25:27.714 
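The `get_transient_errcount` step above pipes `bdev_get_iostat` through the jq filter `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error` and asserts the count is positive (here 198). A minimal Python sketch of the same extraction is below; the field path is taken from the jq filter in the log, but the surrounding JSON shape and the sample value are illustrative assumptions, not captured RPC output:

```python
import json

# Illustrative bdev_get_iostat-style document; only the field path
# (.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error)
# is taken from the log's jq filter -- the rest of the shape is assumed.
sample = json.dumps({
    "bdevs": [{
        "name": "nvme0n1",
        "driver_specific": {
            "nvme_error": {
                "status_code": {
                    "command_transient_transport_error": 198
                }
            }
        }
    }]
})

def transient_errcount(iostat_json: str) -> int:
    """Walk the same path the test's jq filter walks and return the counter."""
    stat = json.loads(iostat_json)
    return stat["bdevs"][0]["driver_specific"]["nvme_error"][
        "status_code"]["command_transient_transport_error"]

print(transient_errcount(sample))
```

The test then only needs the shell check `(( count > 0 ))`, since every injected digest corruption completes as a transient transport error (status 00/22).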
15:35:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:27.714 15:35:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1774643 00:25:27.714 15:35:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:27.714 15:35:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:27.714 15:35:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1774643' 00:25:27.714 killing process with pid 1774643 00:25:27.714 15:35:44 -- common/autotest_common.sh@955 -- # kill 1774643 00:25:27.714 Received shutdown signal, test time was about 2.000000 seconds 00:25:27.715 00:25:27.715 Latency(us) 00:25:27.715 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.715 =================================================================================================================== 00:25:27.715 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:27.715 15:35:44 -- common/autotest_common.sh@960 -- # wait 1774643 00:25:27.715 15:35:45 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:27.715 15:35:45 -- host/digest.sh@54 -- # local rw bs qd 00:25:27.715 15:35:45 -- host/digest.sh@56 -- # rw=randwrite 00:25:27.715 15:35:45 -- host/digest.sh@56 -- # bs=4096 00:25:27.715 15:35:45 -- host/digest.sh@56 -- # qd=128 00:25:27.715 15:35:45 -- host/digest.sh@58 -- # bperfpid=1775414 00:25:27.715 15:35:45 -- host/digest.sh@60 -- # waitforlisten 1775414 /var/tmp/bperf.sock 00:25:27.715 15:35:45 -- common/autotest_common.sh@817 -- # '[' -z 1775414 ']' 00:25:27.715 15:35:45 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:27.715 15:35:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:27.715 15:35:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:27.715 15:35:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:27.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:27.715 15:35:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:27.715 15:35:45 -- common/autotest_common.sh@10 -- # set +x 00:25:27.974 [2024-04-26 15:35:45.168097] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:25:27.974 [2024-04-26 15:35:45.168156] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775414 ] 00:25:27.974 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.974 [2024-04-26 15:35:45.243004] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.974 [2024-04-26 15:35:45.294711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.544 15:35:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:28.544 15:35:45 -- common/autotest_common.sh@850 -- # return 0 00:25:28.544 15:35:45 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:28.544 15:35:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:28.804 15:35:46 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:28.804 15:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.804 15:35:46 -- common/autotest_common.sh@10 -- # set +x 00:25:28.804 15:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.804 15:35:46 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:28.804 15:35:46 -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:29.064 nvme0n1 00:25:29.064 15:35:46 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:29.064 15:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.064 15:35:46 -- common/autotest_common.sh@10 -- # set +x 00:25:29.064 15:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.064 15:35:46 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:29.064 15:35:46 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:29.325 Running I/O for 2 seconds... 00:25:29.325 [2024-04-26 15:35:46.561230] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e4578 00:25:29.325 [2024-04-26 15:35:46.562185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.325 [2024-04-26 15:35:46.562211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:29.325 [2024-04-26 15:35:46.573415] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e99d8 00:25:29.325 [2024-04-26 15:35:46.574368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.325 [2024-04-26 15:35:46.574386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:29.325 [2024-04-26 15:35:46.587147] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190eee38 
00:25:29.325 [2024-04-26 15:35:46.588826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.325 [2024-04-26 15:35:46.588846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:29.325 [2024-04-26 15:35:46.597786] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190ebb98 00:25:29.325 [2024-04-26 15:35:46.598820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.325 [2024-04-26 15:35:46.598839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:29.325 [2024-04-26 15:35:46.609927] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fb048 00:25:29.325 [2024-04-26 15:35:46.610960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.325 [2024-04-26 15:35:46.610976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:29.325 [2024-04-26 15:35:46.622054] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f5be8 00:25:29.325 [2024-04-26 15:35:46.623050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.325 [2024-04-26 15:35:46.623066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:29.325 [2024-04-26 15:35:46.634098] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xaf0b10) with pdu=0x2000190e1b48 00:25:29.325 [2024-04-26 15:35:46.635042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.325 [2024-04-26 15:35:46.635057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:29.325 [2024-04-26 15:35:46.646224] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f0bc0 00:25:29.325 [2024-04-26 15:35:46.647179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.325 [2024-04-26 15:35:46.647196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:29.325 [2024-04-26 15:35:46.658337] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190ea248 00:25:29.325 [2024-04-26 15:35:46.659264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.325 [2024-04-26 15:35:46.659280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:29.325 [2024-04-26 15:35:46.670447] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e6300 00:25:29.325 [2024-04-26 15:35:46.671392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.325 [2024-04-26 15:35:46.671408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:29.325 [2024-04-26 15:35:46.682593] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190ec840 00:25:29.325 [2024-04-26 15:35:46.683593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.325 [2024-04-26 15:35:46.683610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:29.325 [2024-04-26 15:35:46.694628] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f0bc0 00:25:29.325 [2024-04-26 15:35:46.695573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.325 [2024-04-26 15:35:46.695589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:29.326 [2024-04-26 15:35:46.706751] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e1b48 00:25:29.326 [2024-04-26 15:35:46.707695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.326 [2024-04-26 15:35:46.707711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:29.326 [2024-04-26 15:35:46.718845] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f2d80 00:25:29.326 [2024-04-26 15:35:46.719774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.326 [2024-04-26 15:35:46.719790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
00:25:29.326 [2024-04-26 15:35:46.730977] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e95a0 00:25:29.326 [2024-04-26 15:35:46.731872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.326 [2024-04-26 15:35:46.731888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:29.326 [2024-04-26 15:35:46.744726] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e0ea0 00:25:29.326 [2024-04-26 15:35:46.746404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.326 [2024-04-26 15:35:46.746419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:29.326 [2024-04-26 15:35:46.755306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190ee5c8 00:25:29.326 [2024-04-26 15:35:46.756331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.326 [2024-04-26 15:35:46.756346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:29.326 [2024-04-26 15:35:46.767404] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190ec840 00:25:29.326 [2024-04-26 15:35:46.768435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.326 [2024-04-26 15:35:46.768451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:29.587 [2024-04-26 15:35:46.778671] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f2d80 00:25:29.587 [2024-04-26 15:35:46.779644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.587 [2024-04-26 15:35:46.779661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:29.587 [2024-04-26 15:35:46.791505] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e1b48 00:25:29.587 [2024-04-26 15:35:46.792449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.587 [2024-04-26 15:35:46.792465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:29.587 [2024-04-26 15:35:46.805131] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f0bc0 00:25:29.587 [2024-04-26 15:35:46.806758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.587 [2024-04-26 15:35:46.806773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:29.587 [2024-04-26 15:35:46.815724] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e3d08 00:25:29.587 [2024-04-26 15:35:46.816752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.587 [2024-04-26 15:35:46.816768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:29.587 [2024-04-26 15:35:46.827042] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f2d80 00:25:29.587 [2024-04-26 15:35:46.828025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.587 [2024-04-26 15:35:46.828041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:29.587 [2024-04-26 15:35:46.839898] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e99d8 00:25:29.587 [2024-04-26 15:35:46.840827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.587 [2024-04-26 15:35:46.840846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:29.587 [2024-04-26 15:35:46.852018] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e0ea0 00:25:29.587 [2024-04-26 15:35:46.852904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.587 [2024-04-26 15:35:46.852924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:29.587 [2024-04-26 15:35:46.864133] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f3a28 00:25:29.587 [2024-04-26 15:35:46.865073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.587 [2024-04-26 15:35:46.865089] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:29.587 [2024-04-26 15:35:46.876221] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190ea248 00:25:29.587 [2024-04-26 15:35:46.877166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.587 [2024-04-26 15:35:46.877182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:29.587 [2024-04-26 15:35:46.888589] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e6b70 00:25:29.587 [2024-04-26 15:35:46.889627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.587 [2024-04-26 15:35:46.889643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:29.587 [2024-04-26 15:35:46.900704] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fdeb0 00:25:29.587 [2024-04-26 15:35:46.901698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.587 [2024-04-26 15:35:46.901714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:29.587 [2024-04-26 15:35:46.912715] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e0ea0 00:25:29.587 [2024-04-26 15:35:46.913660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.587 
[2024-04-26 15:35:46.913677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:29.587 [2024-04-26 15:35:46.924862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f1430 00:25:29.587 [2024-04-26 15:35:46.925896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.587 [2024-04-26 15:35:46.925911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:29.587 [2024-04-26 15:35:46.936934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fbcf0 00:25:29.587 [2024-04-26 15:35:46.937957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.587 [2024-04-26 15:35:46.937972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:29.587 [2024-04-26 15:35:46.949024] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f31b8 00:25:29.587 [2024-04-26 15:35:46.950057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.587 [2024-04-26 15:35:46.950072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:29.587 [2024-04-26 15:35:46.961125] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190ee5c8 00:25:29.587 [2024-04-26 15:35:46.962165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20087 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:29.587 [2024-04-26 15:35:46.962181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:29.587 [2024-04-26 15:35:46.973241] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e0a68 00:25:29.587 [2024-04-26 15:35:46.974278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.587 [2024-04-26 15:35:46.974294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:29.588 [2024-04-26 15:35:46.985322] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fda78 00:25:29.588 [2024-04-26 15:35:46.986355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.588 [2024-04-26 15:35:46.986370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:29.588 [2024-04-26 15:35:46.997408] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f6890 00:25:29.588 [2024-04-26 15:35:46.998446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.588 [2024-04-26 15:35:46.998463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:29.588 [2024-04-26 15:35:47.009503] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f1430 00:25:29.588 [2024-04-26 15:35:47.010529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:1871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.588 [2024-04-26 15:35:47.010545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:29.588 [2024-04-26 15:35:47.021642] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fbcf0 00:25:29.588 [2024-04-26 15:35:47.022672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.588 [2024-04-26 15:35:47.022687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:29.588 [2024-04-26 15:35:47.033718] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f31b8 00:25:29.588 [2024-04-26 15:35:47.034763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.588 [2024-04-26 15:35:47.034779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:29.850 [2024-04-26 15:35:47.045819] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190ee5c8 00:25:29.850 [2024-04-26 15:35:47.046849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.850 [2024-04-26 15:35:47.046864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:29.850 [2024-04-26 15:35:47.057915] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e0a68 00:25:29.850 [2024-04-26 15:35:47.058936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.850 [2024-04-26 15:35:47.058951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:29.850 [2024-04-26 15:35:47.069988] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fda78 00:25:29.850 [2024-04-26 15:35:47.070979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.850 [2024-04-26 15:35:47.070995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:29.850 [2024-04-26 15:35:47.082002] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190df118 00:25:29.850 [2024-04-26 15:35:47.082916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.850 [2024-04-26 15:35:47.082932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:29.850 [2024-04-26 15:35:47.094098] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f0bc0 00:25:29.850 [2024-04-26 15:35:47.095015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.850 [2024-04-26 15:35:47.095030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:29.850 [2024-04-26 15:35:47.106170] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f6cc8 
00:25:29.850 [2024-04-26 15:35:47.107104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.850 [2024-04-26 15:35:47.107119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:29.850 [2024-04-26 15:35:47.118229] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e38d0 00:25:29.850 [2024-04-26 15:35:47.119149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.850 [2024-04-26 15:35:47.119164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:29.850 [2024-04-26 15:35:47.131847] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e99d8 00:25:29.850 [2024-04-26 15:35:47.133512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.850 [2024-04-26 15:35:47.133528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:29.850 [2024-04-26 15:35:47.142282] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190dece0 00:25:29.850 [2024-04-26 15:35:47.143265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.850 [2024-04-26 15:35:47.143280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:29.850 [2024-04-26 15:35:47.154414] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xaf0b10) with pdu=0x2000190dfdc0 00:25:29.850 [2024-04-26 15:35:47.155385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.850 [2024-04-26 15:35:47.155400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:29.850 [2024-04-26 15:35:47.166549] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190ebfd0 00:25:29.850 [2024-04-26 15:35:47.167524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.850 [2024-04-26 15:35:47.167545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:29.850 [2024-04-26 15:35:47.178670] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190de038 00:25:29.850 [2024-04-26 15:35:47.179648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.850 [2024-04-26 15:35:47.179663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:29.850 [2024-04-26 15:35:47.190766] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e4578 00:25:29.850 [2024-04-26 15:35:47.191707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.850 [2024-04-26 15:35:47.191723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:29.850 [2024-04-26 15:35:47.202267] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190df118 00:25:29.850 [2024-04-26 15:35:47.203224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.850 [2024-04-26 15:35:47.203239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:29.850 [2024-04-26 15:35:47.215527] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fc128 00:25:29.850 [2024-04-26 15:35:47.216682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.850 [2024-04-26 15:35:47.216698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:29.850 [2024-04-26 15:35:47.227650] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fb048 00:25:29.850 [2024-04-26 15:35:47.228801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:52 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.850 [2024-04-26 15:35:47.228816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:29.850 [2024-04-26 15:35:47.239765] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e1b48 00:25:29.850 [2024-04-26 15:35:47.240943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.850 [2024-04-26 15:35:47.240958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 
00:25:29.850 [2024-04-26 15:35:47.251080] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e3498 00:25:29.850 [2024-04-26 15:35:47.252215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.851 [2024-04-26 15:35:47.252230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:29.851 [2024-04-26 15:35:47.263935] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e23b8 00:25:29.851 [2024-04-26 15:35:47.265044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.851 [2024-04-26 15:35:47.265059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:29.851 [2024-04-26 15:35:47.276043] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190ed920 00:25:29.851 [2024-04-26 15:35:47.277136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.851 [2024-04-26 15:35:47.277156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:29.851 [2024-04-26 15:35:47.288144] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190eea00 00:25:29.851 [2024-04-26 15:35:47.289278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.851 [2024-04-26 15:35:47.289293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.112 [2024-04-26 15:35:47.300258] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e88f8 00:25:30.112 [2024-04-26 15:35:47.301394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.112 [2024-04-26 15:35:47.301410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.112 [2024-04-26 15:35:47.312355] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e6fa8 00:25:30.112 [2024-04-26 15:35:47.313481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.112 [2024-04-26 15:35:47.313497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.112 [2024-04-26 15:35:47.324452] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f0ff8 00:25:30.112 [2024-04-26 15:35:47.325569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.112 [2024-04-26 15:35:47.325584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.112 [2024-04-26 15:35:47.336562] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f7970 00:25:30.112 [2024-04-26 15:35:47.337698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.112 [2024-04-26 15:35:47.337713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.112 [2024-04-26 15:35:47.348693] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f8a50 00:25:30.112 [2024-04-26 15:35:47.349828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.112 [2024-04-26 15:35:47.349845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.112 [2024-04-26 15:35:47.360786] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e4140 00:25:30.112 [2024-04-26 15:35:47.361917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.112 [2024-04-26 15:35:47.361932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.112 [2024-04-26 15:35:47.372902] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e5ec8 00:25:30.112 [2024-04-26 15:35:47.374025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.112 [2024-04-26 15:35:47.374041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.112 [2024-04-26 15:35:47.384985] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e0ea0 00:25:30.112 [2024-04-26 15:35:47.386093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.112 [2024-04-26 15:35:47.386108] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.112 [2024-04-26 15:35:47.397063] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fc560 00:25:30.112 [2024-04-26 15:35:47.398225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.112 [2024-04-26 15:35:47.398240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.112 [2024-04-26 15:35:47.409197] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f2510 00:25:30.112 [2024-04-26 15:35:47.410345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.112 [2024-04-26 15:35:47.410360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.112 [2024-04-26 15:35:47.421304] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e9e10 00:25:30.112 [2024-04-26 15:35:47.422448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.112 [2024-04-26 15:35:47.422463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.112 [2024-04-26 15:35:47.433412] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190eaef0 00:25:30.112 [2024-04-26 15:35:47.434567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.112 
[2024-04-26 15:35:47.434582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.112 [2024-04-26 15:35:47.445536] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fef90 00:25:30.112 [2024-04-26 15:35:47.446688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.112 [2024-04-26 15:35:47.446703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.112 [2024-04-26 15:35:47.457648] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fda78 00:25:30.112 [2024-04-26 15:35:47.458801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.112 [2024-04-26 15:35:47.458816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.112 [2024-04-26 15:35:47.469764] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e7818 00:25:30.112 [2024-04-26 15:35:47.470917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.112 [2024-04-26 15:35:47.470932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.112 [2024-04-26 15:35:47.481877] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e4de8 00:25:30.112 [2024-04-26 15:35:47.483004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19923 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:30.112 [2024-04-26 15:35:47.483020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.112 [2024-04-26 15:35:47.493973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f3e60 00:25:30.112 [2024-04-26 15:35:47.495118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.112 [2024-04-26 15:35:47.495133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.112 [2024-04-26 15:35:47.506095] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f6cc8 00:25:30.112 [2024-04-26 15:35:47.507247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.112 [2024-04-26 15:35:47.507263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.112 [2024-04-26 15:35:47.518205] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f5378 00:25:30.112 [2024-04-26 15:35:47.519355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.112 [2024-04-26 15:35:47.519370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.112 [2024-04-26 15:35:47.530308] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e95a0 00:25:30.112 [2024-04-26 15:35:47.531471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:123 nsid:1 lba:12907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.112 [2024-04-26 15:35:47.531486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.113 [2024-04-26 15:35:47.542475] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190df988 00:25:30.113 [2024-04-26 15:35:47.543626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.113 [2024-04-26 15:35:47.543641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.113 [2024-04-26 15:35:47.554608] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190ebb98 00:25:30.113 [2024-04-26 15:35:47.555780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.113 [2024-04-26 15:35:47.555795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.374 [2024-04-26 15:35:47.566746] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e1b48 00:25:30.374 [2024-04-26 15:35:47.567896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.374 [2024-04-26 15:35:47.567911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.374 [2024-04-26 15:35:47.578865] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fb048 00:25:30.374 [2024-04-26 15:35:47.580040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.374 [2024-04-26 15:35:47.580056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.374 [2024-04-26 15:35:47.590934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fc128 00:25:30.374 [2024-04-26 15:35:47.592062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.374 [2024-04-26 15:35:47.592080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.374 [2024-04-26 15:35:47.603021] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fd208 00:25:30.375 [2024-04-26 15:35:47.604171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.375 [2024-04-26 15:35:47.604186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.375 [2024-04-26 15:35:47.615155] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f31b8 00:25:30.375 [2024-04-26 15:35:47.616311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.375 [2024-04-26 15:35:47.616326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.375 [2024-04-26 15:35:47.627245] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190eaab8 00:25:30.375 
[2024-04-26 15:35:47.628403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.375 [2024-04-26 15:35:47.628417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.375 [2024-04-26 15:35:47.639337] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190ff3c8 00:25:30.375 [2024-04-26 15:35:47.640490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.375 [2024-04-26 15:35:47.640505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.375 [2024-04-26 15:35:47.651459] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fe720 00:25:30.375 [2024-04-26 15:35:47.652619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.375 [2024-04-26 15:35:47.652634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.375 [2024-04-26 15:35:47.663602] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190ddc00 00:25:30.375 [2024-04-26 15:35:47.664754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.375 [2024-04-26 15:35:47.664769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.375 [2024-04-26 15:35:47.675713] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xaf0b10) with pdu=0x2000190e6b70 00:25:30.375 [2024-04-26 15:35:47.676864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.375 [2024-04-26 15:35:47.676879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.375 [2024-04-26 15:35:47.687799] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f4b08 00:25:30.375 [2024-04-26 15:35:47.688920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.375 [2024-04-26 15:35:47.688934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.375 [2024-04-26 15:35:47.699895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f3a28 00:25:30.375 [2024-04-26 15:35:47.701020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.375 [2024-04-26 15:35:47.701035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.375 [2024-04-26 15:35:47.711989] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f9b30 00:25:30.375 [2024-04-26 15:35:47.713139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.375 [2024-04-26 15:35:47.713154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.375 [2024-04-26 15:35:47.724117] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f57b0 00:25:30.375 [2024-04-26 15:35:47.725262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.375 [2024-04-26 15:35:47.725277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.375 [2024-04-26 15:35:47.736231] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190dece0 00:25:30.375 [2024-04-26 15:35:47.737374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.375 [2024-04-26 15:35:47.737389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.375 [2024-04-26 15:35:47.748438] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190dfdc0 00:25:30.375 [2024-04-26 15:35:47.749594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.375 [2024-04-26 15:35:47.749609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.375 [2024-04-26 15:35:47.760532] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190ebfd0 00:25:30.375 [2024-04-26 15:35:47.761686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.375 [2024-04-26 15:35:47.761701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 
dnr:0 00:25:30.375 [2024-04-26 15:35:47.772630] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fa3a0 00:25:30.375 [2024-04-26 15:35:47.773772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.375 [2024-04-26 15:35:47.773787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.375 [2024-04-26 15:35:47.784715] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fb480 00:25:30.375 [2024-04-26 15:35:47.785864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.375 [2024-04-26 15:35:47.785880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.375 [2024-04-26 15:35:47.796805] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fc560 00:25:30.375 [2024-04-26 15:35:47.797916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.375 [2024-04-26 15:35:47.797932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.375 [2024-04-26 15:35:47.808900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190ec840 00:25:30.375 [2024-04-26 15:35:47.810011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.375 [2024-04-26 15:35:47.810026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.375 [2024-04-26 15:35:47.821018] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f20d8 00:25:30.375 [2024-04-26 15:35:47.822150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.375 [2024-04-26 15:35:47.822165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.637 [2024-04-26 15:35:47.833136] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e0a68 00:25:30.637 [2024-04-26 15:35:47.834267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.637 [2024-04-26 15:35:47.834282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.637 [2024-04-26 15:35:47.845252] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e6300 00:25:30.637 [2024-04-26 15:35:47.846376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.637 [2024-04-26 15:35:47.846391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.637 [2024-04-26 15:35:47.857413] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190de8a8 00:25:30.637 [2024-04-26 15:35:47.858542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.637 [2024-04-26 15:35:47.858558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.637 [2024-04-26 15:35:47.869525] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f96f8 00:25:30.637 [2024-04-26 15:35:47.870654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.637 [2024-04-26 15:35:47.870669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.637 [2024-04-26 15:35:47.881794] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f8618 00:25:30.637 [2024-04-26 15:35:47.882916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.637 [2024-04-26 15:35:47.882931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.637 [2024-04-26 15:35:47.893897] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f0350 00:25:30.637 [2024-04-26 15:35:47.894988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.637 [2024-04-26 15:35:47.895003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.637 [2024-04-26 15:35:47.905981] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f6458 00:25:30.637 [2024-04-26 15:35:47.907096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.637 [2024-04-26 15:35:47.907114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.637 [2024-04-26 15:35:47.918062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f4b08 00:25:30.637 [2024-04-26 15:35:47.919194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.637 [2024-04-26 15:35:47.919209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.637 [2024-04-26 15:35:47.930188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e8d30 00:25:30.637 [2024-04-26 15:35:47.931320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.637 [2024-04-26 15:35:47.931335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.637 [2024-04-26 15:35:47.942339] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190ee5c8 00:25:30.637 [2024-04-26 15:35:47.943469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.637 [2024-04-26 15:35:47.943483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.637 [2024-04-26 15:35:47.954466] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f7538 00:25:30.637 [2024-04-26 15:35:47.955605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.637 
[2024-04-26 15:35:47.955620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.637 [2024-04-26 15:35:47.966581] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e27f0 00:25:30.637 [2024-04-26 15:35:47.967693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.637 [2024-04-26 15:35:47.967708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.637 [2024-04-26 15:35:47.980270] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e38d0 00:25:30.637 [2024-04-26 15:35:47.982097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.637 [2024-04-26 15:35:47.982112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.637 [2024-04-26 15:35:47.991221] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e88f8 00:25:30.637 [2024-04-26 15:35:47.992538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.638 [2024-04-26 15:35:47.992554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:30.638 [2024-04-26 15:35:48.003499] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fc128 00:25:30.638 [2024-04-26 15:35:48.004806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7348 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:30.638 [2024-04-26 15:35:48.004821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:30.638 [2024-04-26 15:35:48.015754] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e0a68 00:25:30.638 [2024-04-26 15:35:48.017062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.638 [2024-04-26 15:35:48.017078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:30.638 [2024-04-26 15:35:48.027887] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f20d8 00:25:30.638 [2024-04-26 15:35:48.029203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.638 [2024-04-26 15:35:48.029218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:30.638 [2024-04-26 15:35:48.039987] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190ec840 00:25:30.638 [2024-04-26 15:35:48.041250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.638 [2024-04-26 15:35:48.041265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:30.638 [2024-04-26 15:35:48.052118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e5220 00:25:30.638 [2024-04-26 15:35:48.053417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:86 nsid:1 lba:18611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.638 [2024-04-26 15:35:48.053432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:30.638 [2024-04-26 15:35:48.064238] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190ef6a8 00:25:30.638 [2024-04-26 15:35:48.065540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.638 [2024-04-26 15:35:48.065555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:30.638 [2024-04-26 15:35:48.076344] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f81e0 00:25:30.638 [2024-04-26 15:35:48.077647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.638 [2024-04-26 15:35:48.077663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:30.900 [2024-04-26 15:35:48.088490] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f92c0 00:25:30.900 [2024-04-26 15:35:48.089799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.900 [2024-04-26 15:35:48.089814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:30.900 [2024-04-26 15:35:48.100632] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190de470 00:25:30.900 [2024-04-26 15:35:48.101915] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.900 [2024-04-26 15:35:48.101930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:30.900 [2024-04-26 15:35:48.112720] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e6738 00:25:30.900 [2024-04-26 15:35:48.114015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.900 [2024-04-26 15:35:48.114030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:30.900 [2024-04-26 15:35:48.124862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190eb328 00:25:30.900 [2024-04-26 15:35:48.126170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.900 [2024-04-26 15:35:48.126186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:30.900 [2024-04-26 15:35:48.136979] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fdeb0 00:25:30.900 [2024-04-26 15:35:48.138283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.900 [2024-04-26 15:35:48.138297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:30.900 [2024-04-26 15:35:48.149121] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fd640 00:25:30.900 
[2024-04-26 15:35:48.150424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.900 [2024-04-26 15:35:48.150439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:30.900 [2024-04-26 15:35:48.161240] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e73e0 00:25:30.900 [2024-04-26 15:35:48.162537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.900 [2024-04-26 15:35:48.162552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:30.900 [2024-04-26 15:35:48.173348] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e9e10 00:25:30.900 [2024-04-26 15:35:48.174613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.900 [2024-04-26 15:35:48.174628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:30.900 [2024-04-26 15:35:48.185466] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f2510 00:25:30.900 [2024-04-26 15:35:48.186791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.900 [2024-04-26 15:35:48.186806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:30.900 [2024-04-26 15:35:48.197616] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xaf0b10) with pdu=0x2000190fc560 00:25:30.900 [2024-04-26 15:35:48.198916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.900 [2024-04-26 15:35:48.198931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:30.900 [2024-04-26 15:35:48.209710] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e0ea0 00:25:30.900 [2024-04-26 15:35:48.211011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.900 [2024-04-26 15:35:48.211027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:30.900 [2024-04-26 15:35:48.221859] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e8088 00:25:30.900 [2024-04-26 15:35:48.223165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.900 [2024-04-26 15:35:48.223183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:30.900 [2024-04-26 15:35:48.235436] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f5378 00:25:30.900 [2024-04-26 15:35:48.237440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.900 [2024-04-26 15:35:48.237455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:30.900 [2024-04-26 15:35:48.246396] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e4de8 00:25:30.900 [2024-04-26 15:35:48.247876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.900 [2024-04-26 15:35:48.247891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:30.900 [2024-04-26 15:35:48.258660] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f3a28 00:25:30.900 [2024-04-26 15:35:48.260116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.900 [2024-04-26 15:35:48.260130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:30.900 [2024-04-26 15:35:48.270822] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f0788 00:25:30.900 [2024-04-26 15:35:48.272252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.900 [2024-04-26 15:35:48.272267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:30.900 [2024-04-26 15:35:48.282970] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fb8b8 00:25:30.900 [2024-04-26 15:35:48.284441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.900 [2024-04-26 15:35:48.284455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
00:25:30.900 [2024-04-26 15:35:48.295104] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fa7d8 00:25:30.901 [2024-04-26 15:35:48.296528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.901 [2024-04-26 15:35:48.296543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:30.901 [2024-04-26 15:35:48.307206] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190ec408 00:25:30.901 [2024-04-26 15:35:48.308669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.901 [2024-04-26 15:35:48.308684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:30.901 [2024-04-26 15:35:48.319320] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e01f8 00:25:30.901 [2024-04-26 15:35:48.320753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.901 [2024-04-26 15:35:48.320768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:30.901 [2024-04-26 15:35:48.331444] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190ecc78 00:25:30.901 [2024-04-26 15:35:48.332928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.901 [2024-04-26 15:35:48.332942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:30.901 [2024-04-26 15:35:48.343586] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e5658 00:25:30.901 [2024-04-26 15:35:48.345014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.901 [2024-04-26 15:35:48.345029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.163 [2024-04-26 15:35:48.355684] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190ef270 00:25:31.163 [2024-04-26 15:35:48.357146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.163 [2024-04-26 15:35:48.357161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.163 [2024-04-26 15:35:48.367795] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f8618 00:25:31.163 [2024-04-26 15:35:48.369260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.163 [2024-04-26 15:35:48.369275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.163 [2024-04-26 15:35:48.379932] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f96f8 00:25:31.163 [2024-04-26 15:35:48.381393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.163 [2024-04-26 15:35:48.381409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.163 [2024-04-26 15:35:48.392030] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f20d8 00:25:31.163 [2024-04-26 15:35:48.393498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.163 [2024-04-26 15:35:48.393513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.163 [2024-04-26 15:35:48.404173] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e0a68 00:25:31.163 [2024-04-26 15:35:48.405625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.163 [2024-04-26 15:35:48.405640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.163 [2024-04-26 15:35:48.416301] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fc128 00:25:31.163 [2024-04-26 15:35:48.417769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.163 [2024-04-26 15:35:48.417784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.163 [2024-04-26 15:35:48.428390] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e95a0 00:25:31.163 [2024-04-26 15:35:48.429858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.163 [2024-04-26 15:35:48.429873] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.163 [2024-04-26 15:35:48.440526] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f5378 00:25:31.163 [2024-04-26 15:35:48.441998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.163 [2024-04-26 15:35:48.442013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.163 [2024-04-26 15:35:48.452651] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f6cc8 00:25:31.163 [2024-04-26 15:35:48.454086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.163 [2024-04-26 15:35:48.454101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.163 [2024-04-26 15:35:48.464783] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f3e60 00:25:31.163 [2024-04-26 15:35:48.466248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.163 [2024-04-26 15:35:48.466263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.163 [2024-04-26 15:35:48.476948] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190f0bc0 00:25:31.163 [2024-04-26 15:35:48.478416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.163 
[2024-04-26 15:35:48.478432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.164 [2024-04-26 15:35:48.489087] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190fac10 00:25:31.164 [2024-04-26 15:35:48.490551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.164 [2024-04-26 15:35:48.490566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.164 [2024-04-26 15:35:48.501198] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190e1710 00:25:31.164 [2024-04-26 15:35:48.502669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.164 [2024-04-26 15:35:48.502684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.164 [2024-04-26 15:35:48.513321] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190eb760 00:25:31.164 [2024-04-26 15:35:48.514791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.164 [2024-04-26 15:35:48.514806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.164 [2024-04-26 15:35:48.525457] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190df550 00:25:31.164 [2024-04-26 15:35:48.526917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19238 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:31.164 [2024-04-26 15:35:48.526934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.164 [2024-04-26 15:35:48.537584] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190ed0b0 00:25:31.164 [2024-04-26 15:35:48.539021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.164 [2024-04-26 15:35:48.539036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.164 [2024-04-26 15:35:48.549697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf0b10) with pdu=0x2000190eff18 00:25:31.164 [2024-04-26 15:35:48.551143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.164 [2024-04-26 15:35:48.551159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:31.164 00:25:31.164 Latency(us) 00:25:31.164 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.164 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:31.164 nvme0n1 : 2.00 21051.90 82.23 0.00 0.00 6072.55 2211.84 14199.47 00:25:31.164 =================================================================================================================== 00:25:31.164 Total : 21051.90 82.23 0.00 0.00 6072.55 2211.84 14199.47 00:25:31.164 0 00:25:31.164 15:35:48 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:31.164 15:35:48 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:31.164 | .driver_specific 00:25:31.164 | .nvme_error 00:25:31.164 | .status_code 00:25:31.164 | 
.command_transient_transport_error' 00:25:31.164 15:35:48 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:31.164 15:35:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:31.426 15:35:48 -- host/digest.sh@71 -- # (( 165 > 0 )) 00:25:31.426 15:35:48 -- host/digest.sh@73 -- # killprocess 1775414 00:25:31.426 15:35:48 -- common/autotest_common.sh@936 -- # '[' -z 1775414 ']' 00:25:31.426 15:35:48 -- common/autotest_common.sh@940 -- # kill -0 1775414 00:25:31.426 15:35:48 -- common/autotest_common.sh@941 -- # uname 00:25:31.426 15:35:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:31.426 15:35:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1775414 00:25:31.426 15:35:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:31.426 15:35:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:31.426 15:35:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1775414' 00:25:31.426 killing process with pid 1775414 00:25:31.426 15:35:48 -- common/autotest_common.sh@955 -- # kill 1775414 00:25:31.426 Received shutdown signal, test time was about 2.000000 seconds 00:25:31.426 00:25:31.426 Latency(us) 00:25:31.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.426 =================================================================================================================== 00:25:31.426 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:31.426 15:35:48 -- common/autotest_common.sh@960 -- # wait 1775414 00:25:31.687 15:35:48 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:31.687 15:35:48 -- host/digest.sh@54 -- # local rw bs qd 00:25:31.687 15:35:48 -- host/digest.sh@56 -- # rw=randwrite 00:25:31.687 15:35:48 -- host/digest.sh@56 -- # bs=131072 00:25:31.687 15:35:48 -- host/digest.sh@56 -- # qd=16 00:25:31.687 15:35:48 -- 
host/digest.sh@58 -- # bperfpid=1776121 00:25:31.687 15:35:48 -- host/digest.sh@60 -- # waitforlisten 1776121 /var/tmp/bperf.sock 00:25:31.687 15:35:48 -- common/autotest_common.sh@817 -- # '[' -z 1776121 ']' 00:25:31.687 15:35:48 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:31.687 15:35:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:31.687 15:35:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:31.687 15:35:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:31.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:31.687 15:35:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:31.687 15:35:48 -- common/autotest_common.sh@10 -- # set +x 00:25:31.687 [2024-04-26 15:35:48.953218] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:25:31.687 [2024-04-26 15:35:48.953276] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1776121 ] 00:25:31.687 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:31.687 Zero copy mechanism will not be used. 
00:25:31.687 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.687 [2024-04-26 15:35:49.029277] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.687 [2024-04-26 15:35:49.081125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.629 15:35:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:32.629 15:35:49 -- common/autotest_common.sh@850 -- # return 0 00:25:32.629 15:35:49 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:32.629 15:35:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:32.629 15:35:49 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:32.629 15:35:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.629 15:35:49 -- common/autotest_common.sh@10 -- # set +x 00:25:32.629 15:35:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.629 15:35:49 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:32.629 15:35:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:32.889 nvme0n1 00:25:32.889 15:35:50 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:32.889 15:35:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.889 15:35:50 -- common/autotest_common.sh@10 -- # set +x 00:25:32.889 15:35:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.889 15:35:50 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:32.889 15:35:50 -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:32.889 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:32.889 Zero copy mechanism will not be used. 00:25:32.889 Running I/O for 2 seconds... 00:25:32.889 [2024-04-26 15:35:50.224964] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:32.889 [2024-04-26 15:35:50.225325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.889 [2024-04-26 15:35:50.225351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.889 [2024-04-26 15:35:50.235512] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:32.889 [2024-04-26 15:35:50.235861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.889 [2024-04-26 15:35:50.235879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.889 [2024-04-26 15:35:50.246747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:32.889 [2024-04-26 15:35:50.247068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.889 [2024-04-26 15:35:50.247085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.889 [2024-04-26 15:35:50.258700] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:32.889 
[2024-04-26 15:35:50.259041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.889 [2024-04-26 15:35:50.259062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.889 [2024-04-26 15:35:50.267800] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:32.889 [2024-04-26 15:35:50.267879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.889 [2024-04-26 15:35:50.267895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.889 [2024-04-26 15:35:50.276727] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:32.889 [2024-04-26 15:35:50.277075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.889 [2024-04-26 15:35:50.277092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.889 [2024-04-26 15:35:50.287410] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:32.889 [2024-04-26 15:35:50.287760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.889 [2024-04-26 15:35:50.287776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.889 [2024-04-26 15:35:50.297691] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:32.889 [2024-04-26 15:35:50.298031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.889 [2024-04-26 15:35:50.298047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.889 [2024-04-26 15:35:50.306748] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:32.889 [2024-04-26 15:35:50.307100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.889 [2024-04-26 15:35:50.307117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.889 [2024-04-26 15:35:50.316864] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:32.889 [2024-04-26 15:35:50.317218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.889 [2024-04-26 15:35:50.317234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.889 [2024-04-26 15:35:50.325876] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:32.889 [2024-04-26 15:35:50.326235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.889 [2024-04-26 15:35:50.326252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.889 [2024-04-26 15:35:50.333116] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:32.889 [2024-04-26 15:35:50.333464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.889 [2024-04-26 15:35:50.333480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.149 [2024-04-26 15:35:50.343895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.149 [2024-04-26 15:35:50.344237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.149 [2024-04-26 15:35:50.344253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.149 [2024-04-26 15:35:50.351982] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.149 [2024-04-26 15:35:50.352319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.149 [2024-04-26 15:35:50.352335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.149 [2024-04-26 15:35:50.359500] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.149 [2024-04-26 15:35:50.359717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.149 [2024-04-26 15:35:50.359733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:25:33.149 [2024-04-26 15:35:50.366125] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.149 [2024-04-26 15:35:50.366341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.149 [2024-04-26 15:35:50.366358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.149 [2024-04-26 15:35:50.376906] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.149 [2024-04-26 15:35:50.377279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.149 [2024-04-26 15:35:50.377295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.149 [2024-04-26 15:35:50.383223] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.149 [2024-04-26 15:35:50.383568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.149 [2024-04-26 15:35:50.383585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.149 [2024-04-26 15:35:50.393127] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.149 [2024-04-26 15:35:50.393463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.149 [2024-04-26 15:35:50.393479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.149 [2024-04-26 15:35:50.398151] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.149 [2024-04-26 15:35:50.398236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.149 [2024-04-26 15:35:50.398251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.149 [2024-04-26 15:35:50.403879] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.149 [2024-04-26 15:35:50.404217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.149 [2024-04-26 15:35:50.404237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.149 [2024-04-26 15:35:50.411823] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.149 [2024-04-26 15:35:50.412178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.149 [2024-04-26 15:35:50.412195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.149 [2024-04-26 15:35:50.419850] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.149 [2024-04-26 15:35:50.420062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.149 [2024-04-26 15:35:50.420077] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.149 [2024-04-26 15:35:50.428686] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.149 [2024-04-26 15:35:50.429033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.149 [2024-04-26 15:35:50.429049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.149 [2024-04-26 15:35:50.435481] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.149 [2024-04-26 15:35:50.435555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.149 [2024-04-26 15:35:50.435569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.149 [2024-04-26 15:35:50.445238] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.149 [2024-04-26 15:35:50.445462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.149 [2024-04-26 15:35:50.445479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.149 [2024-04-26 15:35:50.454212] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.149 [2024-04-26 15:35:50.454551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:33.149 [2024-04-26 15:35:50.454567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.149 [2024-04-26 15:35:50.460654] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.149 [2024-04-26 15:35:50.460997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.149 [2024-04-26 15:35:50.461014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.149 [2024-04-26 15:35:50.468300] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.149 [2024-04-26 15:35:50.468641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.149 [2024-04-26 15:35:50.468657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.149 [2024-04-26 15:35:50.473974] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.149 [2024-04-26 15:35:50.474303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.149 [2024-04-26 15:35:50.474319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.149 [2024-04-26 15:35:50.479724] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.149 [2024-04-26 15:35:50.480065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.149 [2024-04-26 15:35:50.480081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.149 [2024-04-26 15:35:50.485088] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.149 [2024-04-26 15:35:50.485300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.149 [2024-04-26 15:35:50.485315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.149 [2024-04-26 15:35:50.495249] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.149 [2024-04-26 15:35:50.495581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.149 [2024-04-26 15:35:50.495597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.149 [2024-04-26 15:35:50.500123] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.149 [2024-04-26 15:35:50.500450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.149 [2024-04-26 15:35:50.500466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.149 [2024-04-26 15:35:50.505216] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.149 [2024-04-26 15:35:50.505538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.149 [2024-04-26 15:35:50.505555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.149 [2024-04-26 15:35:50.511843] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.150 [2024-04-26 15:35:50.512193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.150 [2024-04-26 15:35:50.512209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.150 [2024-04-26 15:35:50.519794] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.150 [2024-04-26 15:35:50.520106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.150 [2024-04-26 15:35:50.520122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.150 [2024-04-26 15:35:50.526953] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.150 [2024-04-26 15:35:50.527297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.150 [2024-04-26 15:35:50.527313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.150 [2024-04-26 15:35:50.532257] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 
00:25:33.150 [2024-04-26 15:35:50.532577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.150 [2024-04-26 15:35:50.532593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.150 [2024-04-26 15:35:50.541566] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.150 [2024-04-26 15:35:50.541900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.150 [2024-04-26 15:35:50.541915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.150 [2024-04-26 15:35:50.547443] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.150 [2024-04-26 15:35:50.547655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.150 [2024-04-26 15:35:50.547671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.150 [2024-04-26 15:35:50.553619] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.150 [2024-04-26 15:35:50.553832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.150 [2024-04-26 15:35:50.553853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.150 [2024-04-26 15:35:50.559904] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.150 [2024-04-26 15:35:50.560273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.150 [2024-04-26 15:35:50.560288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.150 [2024-04-26 15:35:50.570104] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.150 [2024-04-26 15:35:50.570437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.150 [2024-04-26 15:35:50.570453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.150 [2024-04-26 15:35:50.579492] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.150 [2024-04-26 15:35:50.579840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.150 [2024-04-26 15:35:50.579856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.150 [2024-04-26 15:35:50.587427] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.150 [2024-04-26 15:35:50.587761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.150 [2024-04-26 15:35:50.587777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.150 [2024-04-26 15:35:50.595270] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.150 [2024-04-26 15:35:50.595625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.150 [2024-04-26 15:35:50.595644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.601002] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.601341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.601357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.607537] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.607876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.607892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.613868] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.614187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.614203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.622033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.622370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.622386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.629110] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.629443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.629459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.638425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.638683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.638699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.648845] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.649192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.649207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.659147] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.659500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.659516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.669467] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.669574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.669589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.680536] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.680628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.680642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.691011] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.691084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.691098] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.701503] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.701871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.701887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.712516] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.712871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.712886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.724168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.724501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.724517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.733145] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.733225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:33.411 [2024-04-26 15:35:50.733239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.744898] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.745229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.745245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.754607] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.754947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.754964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.764466] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.764815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.764831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.774155] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.774502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.774518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.782353] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.782694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.782710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.792181] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.792510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.792526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.799934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.800282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.800298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.807438] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.807753] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.807769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.815413] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.815760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.815776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.823411] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.823492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.823506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.831588] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.831942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.831961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.840921] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 
00:25:33.411 [2024-04-26 15:35:50.841252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.841267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.411 [2024-04-26 15:35:50.851936] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.411 [2024-04-26 15:35:50.852276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.411 [2024-04-26 15:35:50.852292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:50.860349] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.672 [2024-04-26 15:35:50.860703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-04-26 15:35:50.860720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:50.870474] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.672 [2024-04-26 15:35:50.870829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-04-26 15:35:50.870849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:50.879672] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.672 [2024-04-26 15:35:50.880027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-04-26 15:35:50.880043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:50.887594] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.672 [2024-04-26 15:35:50.887958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-04-26 15:35:50.887974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:50.895112] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.672 [2024-04-26 15:35:50.895450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-04-26 15:35:50.895466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:50.902040] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.672 [2024-04-26 15:35:50.902382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-04-26 15:35:50.902398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:50.910647] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.672 [2024-04-26 15:35:50.910889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-04-26 15:35:50.910905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:50.919872] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.672 [2024-04-26 15:35:50.920220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-04-26 15:35:50.920236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:50.929149] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.672 [2024-04-26 15:35:50.929242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-04-26 15:35:50.929256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:50.940971] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.672 [2024-04-26 15:35:50.941325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-04-26 15:35:50.941342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:50.949966] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.672 [2024-04-26 15:35:50.950317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-04-26 15:35:50.950333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:50.960075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.672 [2024-04-26 15:35:50.960162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-04-26 15:35:50.960175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:50.971360] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.672 [2024-04-26 15:35:50.971697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-04-26 15:35:50.971713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:50.982970] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.672 [2024-04-26 15:35:50.983305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-04-26 15:35:50.983321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:50.993114] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.672 [2024-04-26 15:35:50.993432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-04-26 15:35:50.993448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:51.002134] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.672 [2024-04-26 15:35:51.002477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-04-26 15:35:51.002493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:51.012374] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.672 [2024-04-26 15:35:51.012728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-04-26 15:35:51.012743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:51.022211] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.672 [2024-04-26 15:35:51.022435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-04-26 15:35:51.022451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:51.031785] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.672 [2024-04-26 15:35:51.031868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-04-26 15:35:51.031887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:51.041388] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.672 [2024-04-26 15:35:51.041722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-04-26 15:35:51.041739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:51.048338] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.672 [2024-04-26 15:35:51.048660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-04-26 15:35:51.048676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:51.053804] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.672 [2024-04-26 15:35:51.054161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:33.672 [2024-04-26 15:35:51.054177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:51.061550] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.672 [2024-04-26 15:35:51.061883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-04-26 15:35:51.061899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:51.068874] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.672 [2024-04-26 15:35:51.069195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-04-26 15:35:51.069218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.672 [2024-04-26 15:35:51.074162] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.673 [2024-04-26 15:35:51.074510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.673 [2024-04-26 15:35:51.074527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.673 [2024-04-26 15:35:51.081210] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.673 [2024-04-26 15:35:51.081527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.673 [2024-04-26 15:35:51.081543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.673 [2024-04-26 15:35:51.087232] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.673 [2024-04-26 15:35:51.087441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.673 [2024-04-26 15:35:51.087458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.673 [2024-04-26 15:35:51.096086] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.673 [2024-04-26 15:35:51.096173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.673 [2024-04-26 15:35:51.096187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.673 [2024-04-26 15:35:51.100988] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.673 [2024-04-26 15:35:51.101326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.673 [2024-04-26 15:35:51.101342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.673 [2024-04-26 15:35:51.108428] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.673 [2024-04-26 15:35:51.108769] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.673 [2024-04-26 15:35:51.108786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.673 [2024-04-26 15:35:51.118239] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.673 [2024-04-26 15:35:51.118567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.673 [2024-04-26 15:35:51.118584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.934 [2024-04-26 15:35:51.128867] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.934 [2024-04-26 15:35:51.129210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.934 [2024-04-26 15:35:51.129226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.934 [2024-04-26 15:35:51.141557] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.934 [2024-04-26 15:35:51.141915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.934 [2024-04-26 15:35:51.141931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.934 [2024-04-26 15:35:51.154118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 
00:25:33.934 [2024-04-26 15:35:51.154463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.934 [2024-04-26 15:35:51.154479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.934 [2024-04-26 15:35:51.166380] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.934 [2024-04-26 15:35:51.166732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.934 [2024-04-26 15:35:51.166749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.934 [2024-04-26 15:35:51.175890] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.934 [2024-04-26 15:35:51.176223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.934 [2024-04-26 15:35:51.176239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.934 [2024-04-26 15:35:51.182790] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.934 [2024-04-26 15:35:51.183148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.934 [2024-04-26 15:35:51.183164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.934 [2024-04-26 15:35:51.188516] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.934 [2024-04-26 15:35:51.188596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.934 [2024-04-26 15:35:51.188611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.934 [2024-04-26 15:35:51.196608] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.934 [2024-04-26 15:35:51.196822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.934 [2024-04-26 15:35:51.196842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.934 [2024-04-26 15:35:51.201670] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.934 [2024-04-26 15:35:51.202014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.934 [2024-04-26 15:35:51.202030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.934 [2024-04-26 15:35:51.208847] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.934 [2024-04-26 15:35:51.209240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.934 [2024-04-26 15:35:51.209256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.934 [2024-04-26 15:35:51.216285] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.934 [2024-04-26 15:35:51.216584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.934 [2024-04-26 15:35:51.216600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.934 [2024-04-26 15:35:51.221025] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.934 [2024-04-26 15:35:51.221363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.934 [2024-04-26 15:35:51.221379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.934 [2024-04-26 15:35:51.228187] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.934 [2024-04-26 15:35:51.228522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.934 [2024-04-26 15:35:51.228538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.934 [2024-04-26 15:35:51.233804] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.934 [2024-04-26 15:35:51.234020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.934 [2024-04-26 15:35:51.234036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:25:33.934 [2024-04-26 15:35:51.239117] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.934 [2024-04-26 15:35:51.239436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.934 [2024-04-26 15:35:51.239452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.934 [2024-04-26 15:35:51.244582] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.934 [2024-04-26 15:35:51.244907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.934 [2024-04-26 15:35:51.244923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.934 [2024-04-26 15:35:51.253411] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.934 [2024-04-26 15:35:51.253728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.934 [2024-04-26 15:35:51.253743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.934 [2024-04-26 15:35:51.260555] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.934 [2024-04-26 15:35:51.260815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.934 [2024-04-26 15:35:51.260830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.934 [2024-04-26 15:35:51.269123] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.935 [2024-04-26 15:35:51.269456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.935 [2024-04-26 15:35:51.269475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.935 [2024-04-26 15:35:51.276082] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.935 [2024-04-26 15:35:51.276416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.935 [2024-04-26 15:35:51.276432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.935 [2024-04-26 15:35:51.283567] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.935 [2024-04-26 15:35:51.283895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.935 [2024-04-26 15:35:51.283910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.935 [2024-04-26 15:35:51.288540] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.935 [2024-04-26 15:35:51.288873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.935 [2024-04-26 15:35:51.288889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.935 [2024-04-26 15:35:51.294178] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.935 [2024-04-26 15:35:51.294502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.935 [2024-04-26 15:35:51.294517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.935 [2024-04-26 15:35:51.300126] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.935 [2024-04-26 15:35:51.300336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.935 [2024-04-26 15:35:51.300351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.935 [2024-04-26 15:35:51.306768] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.935 [2024-04-26 15:35:51.307125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.935 [2024-04-26 15:35:51.307141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.935 [2024-04-26 15:35:51.314243] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.935 [2024-04-26 15:35:51.314592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:33.935 [2024-04-26 15:35:51.314608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.935 [2024-04-26 15:35:51.320717] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.935 [2024-04-26 15:35:51.320929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.935 [2024-04-26 15:35:51.320945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.935 [2024-04-26 15:35:51.329252] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.935 [2024-04-26 15:35:51.329584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.935 [2024-04-26 15:35:51.329600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.935 [2024-04-26 15:35:51.337823] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.935 [2024-04-26 15:35:51.338147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.935 [2024-04-26 15:35:51.338163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.935 [2024-04-26 15:35:51.346729] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.935 [2024-04-26 15:35:51.347061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.935 [2024-04-26 15:35:51.347077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.935 [2024-04-26 15:35:51.356597] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.935 [2024-04-26 15:35:51.356936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.935 [2024-04-26 15:35:51.356953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.935 [2024-04-26 15:35:51.367604] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.935 [2024-04-26 15:35:51.367949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.935 [2024-04-26 15:35:51.367966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.935 [2024-04-26 15:35:51.377479] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:33.935 [2024-04-26 15:35:51.377695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.935 [2024-04-26 15:35:51.377711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.196 [2024-04-26 15:35:51.387998] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.196 [2024-04-26 15:35:51.388323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.196 [2024-04-26 15:35:51.388339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.196 [2024-04-26 15:35:51.399050] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.196 [2024-04-26 15:35:51.399210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.196 [2024-04-26 15:35:51.399225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.196 [2024-04-26 15:35:51.408386] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.196 [2024-04-26 15:35:51.408720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.196 [2024-04-26 15:35:51.408736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.196 [2024-04-26 15:35:51.419982] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.196 [2024-04-26 15:35:51.420329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.196 [2024-04-26 15:35:51.420345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.196 [2024-04-26 15:35:51.430497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 
00:25:34.196 [2024-04-26 15:35:51.430843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.196 [2024-04-26 15:35:51.430859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.196 [2024-04-26 15:35:51.440651] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.196 [2024-04-26 15:35:51.441000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.196 [2024-04-26 15:35:51.441017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.196 [2024-04-26 15:35:51.449432] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.196 [2024-04-26 15:35:51.449647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.196 [2024-04-26 15:35:51.449663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.196 [2024-04-26 15:35:51.456607] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.196 [2024-04-26 15:35:51.456955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.196 [2024-04-26 15:35:51.456972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.196 [2024-04-26 15:35:51.462483] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.196 [2024-04-26 15:35:51.462816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.196 [2024-04-26 15:35:51.462832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.196 [2024-04-26 15:35:51.472550] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.196 [2024-04-26 15:35:51.472767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.196 [2024-04-26 15:35:51.472783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.196 [2024-04-26 15:35:51.482617] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.196 [2024-04-26 15:35:51.482955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.197 [2024-04-26 15:35:51.482971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.197 [2024-04-26 15:35:51.494019] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.197 [2024-04-26 15:35:51.494368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.197 [2024-04-26 15:35:51.494386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.197 [2024-04-26 
15:35:51.503734] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.197 [2024-04-26 15:35:51.504058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.197 [2024-04-26 15:35:51.504074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.197 [2024-04-26 15:35:51.515337] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.197 [2024-04-26 15:35:51.515670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.197 [2024-04-26 15:35:51.515686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.197 [2024-04-26 15:35:51.528288] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.197 [2024-04-26 15:35:51.528637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.197 [2024-04-26 15:35:51.528653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.197 [2024-04-26 15:35:51.536196] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.197 [2024-04-26 15:35:51.536548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.197 [2024-04-26 15:35:51.536563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.197 [2024-04-26 15:35:51.544851] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.197 [2024-04-26 15:35:51.545169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.197 [2024-04-26 15:35:51.545185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.197 [2024-04-26 15:35:51.553145] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.197 [2024-04-26 15:35:51.553488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.197 [2024-04-26 15:35:51.553503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.197 [2024-04-26 15:35:51.561279] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.197 [2024-04-26 15:35:51.561621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.197 [2024-04-26 15:35:51.561636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.197 [2024-04-26 15:35:51.566508] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.197 [2024-04-26 15:35:51.566842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.197 [2024-04-26 15:35:51.566858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.197 [2024-04-26 15:35:51.572903] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.197 [2024-04-26 15:35:51.573235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.197 [2024-04-26 15:35:51.573251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.197 [2024-04-26 15:35:51.580413] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.197 [2024-04-26 15:35:51.580748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.197 [2024-04-26 15:35:51.580763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.197 [2024-04-26 15:35:51.586140] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.197 [2024-04-26 15:35:51.586458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.197 [2024-04-26 15:35:51.586474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.197 [2024-04-26 15:35:51.590963] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.197 [2024-04-26 15:35:51.591374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.197 [2024-04-26 15:35:51.591390] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.197 [2024-04-26 15:35:51.596808] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.197 [2024-04-26 15:35:51.597049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.197 [2024-04-26 15:35:51.597065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.197 [2024-04-26 15:35:51.601208] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.197 [2024-04-26 15:35:51.601416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.197 [2024-04-26 15:35:51.601433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.197 [2024-04-26 15:35:51.608043] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.197 [2024-04-26 15:35:51.608364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.197 [2024-04-26 15:35:51.608380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.197 [2024-04-26 15:35:51.614701] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.197 [2024-04-26 15:35:51.615022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:34.197 [2024-04-26 15:35:51.615038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.197 [2024-04-26 15:35:51.625692] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.197 [2024-04-26 15:35:51.626042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.197 [2024-04-26 15:35:51.626061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.197 [2024-04-26 15:35:51.634631] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.197 [2024-04-26 15:35:51.634957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.197 [2024-04-26 15:35:51.634973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.197 [2024-04-26 15:35:51.642739] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.197 [2024-04-26 15:35:51.643053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.197 [2024-04-26 15:35:51.643069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.459 [2024-04-26 15:35:51.652519] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.459 [2024-04-26 15:35:51.652871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.459 [2024-04-26 15:35:51.652888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.459 [2024-04-26 15:35:51.663034] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.459 [2024-04-26 15:35:51.663372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.459 [2024-04-26 15:35:51.663388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.459 [2024-04-26 15:35:51.673628] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.459 [2024-04-26 15:35:51.673970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.459 [2024-04-26 15:35:51.673987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.459 [2024-04-26 15:35:51.684914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.459 [2024-04-26 15:35:51.685243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.459 [2024-04-26 15:35:51.685259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.459 [2024-04-26 15:35:51.694381] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.459 [2024-04-26 15:35:51.694718] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.459 [2024-04-26 15:35:51.694734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.459 [2024-04-26 15:35:51.704505] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.459 [2024-04-26 15:35:51.704609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.459 [2024-04-26 15:35:51.704623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.459 [2024-04-26 15:35:51.717637] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.459 [2024-04-26 15:35:51.717987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.459 [2024-04-26 15:35:51.718003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.459 [2024-04-26 15:35:51.729961] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.459 [2024-04-26 15:35:51.730294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.459 [2024-04-26 15:35:51.730310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.459 [2024-04-26 15:35:51.740281] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 
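The repeated `tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error` records above mean the CRC32C computed over a received PDU payload did not match the DDGST field carried in the PDU (in this test run the mismatches appear to be injected deliberately, and each one is surfaced to the host as a TRANSIENT TRANSPORT ERROR completion). NVMe/TCP data digests use CRC32C (the Castagnoli polynomial), not the zlib CRC32. The sketch below is an illustrative, assumption-laden model of that check, not SPDK's actual code: a minimal bitwise CRC32C plus a toy `verify_ddgst` helper (both names are hypothetical).

```python
CRC32C_POLY = 0x82F63B78  # reflected Castagnoli polynomial used by NVMe/TCP digests

def crc32c(data: bytes) -> int:
    """Bitwise CRC32C (slow but dependency-free); init and final XOR are 0xFFFFFFFF."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ CRC32C_POLY if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

def verify_ddgst(payload: bytes, ddgst: int) -> bool:
    """Toy model of the receive-side data digest check: recompute and compare."""
    return crc32c(payload) == ddgst

# Well-known CRC32C check value:
assert crc32c(b"123456789") == 0xE3069283

# A single corrupted payload byte changes the digest, which is what the
# "Data digest error" log records above are reporting.
good = b"\x00" * 32
bad = b"\x01" + good[1:]
assert verify_ddgst(good, crc32c(good))
assert not verify_ddgst(bad, crc32c(good))
```

In the real transport, a DDGST mismatch is detected per PDU on the receive path; SPDK's target fails the affected command rather than silently accepting corrupted data, matching the one-error-per-WRITE pattern in the log.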
00:25:34.459 [2024-04-26 15:35:51.740530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.459 [2024-04-26 15:35:51.740546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.459 [2024-04-26 15:35:51.751605] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.459 [2024-04-26 15:35:51.751910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.459 [2024-04-26 15:35:51.751925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.459 [2024-04-26 15:35:51.763783] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.459 [2024-04-26 15:35:51.763885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.459 [2024-04-26 15:35:51.763899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.459 [2024-04-26 15:35:51.776394] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.459 [2024-04-26 15:35:51.776731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.459 [2024-04-26 15:35:51.776747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.459 [2024-04-26 15:35:51.789222] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.459 [2024-04-26 15:35:51.789562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.459 [2024-04-26 15:35:51.789578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.459 [2024-04-26 15:35:51.801639] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.459 [2024-04-26 15:35:51.801757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.459 [2024-04-26 15:35:51.801771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.459 [2024-04-26 15:35:51.814241] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.459 [2024-04-26 15:35:51.814580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.459 [2024-04-26 15:35:51.814596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.459 [2024-04-26 15:35:51.825953] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.459 [2024-04-26 15:35:51.826293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.459 [2024-04-26 15:35:51.826309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.459 [2024-04-26 15:35:51.838187] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.459 [2024-04-26 15:35:51.838511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.459 [2024-04-26 15:35:51.838527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.459 [2024-04-26 15:35:51.852382] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.459 [2024-04-26 15:35:51.852722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.460 [2024-04-26 15:35:51.852738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.460 [2024-04-26 15:35:51.863336] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.460 [2024-04-26 15:35:51.863653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.460 [2024-04-26 15:35:51.863669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.460 [2024-04-26 15:35:51.872715] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.460 [2024-04-26 15:35:51.872937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.460 [2024-04-26 15:35:51.872953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:25:34.460 [2024-04-26 15:35:51.879855] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.460 [2024-04-26 15:35:51.880209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.460 [2024-04-26 15:35:51.880225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.460 [2024-04-26 15:35:51.886550] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.460 [2024-04-26 15:35:51.886881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.460 [2024-04-26 15:35:51.886897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.460 [2024-04-26 15:35:51.892314] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.460 [2024-04-26 15:35:51.892641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.460 [2024-04-26 15:35:51.892656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.460 [2024-04-26 15:35:51.900375] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.460 [2024-04-26 15:35:51.900700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.460 [2024-04-26 15:35:51.900718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.721 [2024-04-26 15:35:51.907912] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.721 [2024-04-26 15:35:51.908264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.721 [2024-04-26 15:35:51.908280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.721 [2024-04-26 15:35:51.917020] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.721 [2024-04-26 15:35:51.917335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.721 [2024-04-26 15:35:51.917350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.721 [2024-04-26 15:35:51.927561] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.721 [2024-04-26 15:35:51.927885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.721 [2024-04-26 15:35:51.927902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.721 [2024-04-26 15:35:51.940291] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.721 [2024-04-26 15:35:51.940379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.721 [2024-04-26 15:35:51.940393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.721 [2024-04-26 15:35:51.948542] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.721 [2024-04-26 15:35:51.948911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.721 [2024-04-26 15:35:51.948927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.721 [2024-04-26 15:35:51.955932] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.721 [2024-04-26 15:35:51.956258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.721 [2024-04-26 15:35:51.956273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.721 [2024-04-26 15:35:51.965659] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.721 [2024-04-26 15:35:51.965986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.721 [2024-04-26 15:35:51.966002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.721 [2024-04-26 15:35:51.975973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.721 [2024-04-26 15:35:51.976296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:34.721 [2024-04-26 15:35:51.976311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.721 [2024-04-26 15:35:51.988666] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.721 [2024-04-26 15:35:51.989021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.721 [2024-04-26 15:35:51.989036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.721 [2024-04-26 15:35:51.997529] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.721 [2024-04-26 15:35:51.997747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.721 [2024-04-26 15:35:51.997763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.721 [2024-04-26 15:35:52.002834] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.721 [2024-04-26 15:35:52.003069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.721 [2024-04-26 15:35:52.003085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.721 [2024-04-26 15:35:52.007476] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.721 [2024-04-26 15:35:52.007812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.721 [2024-04-26 15:35:52.007828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.721 [2024-04-26 15:35:52.014118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.721 [2024-04-26 15:35:52.014453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.721 [2024-04-26 15:35:52.014468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.721 [2024-04-26 15:35:52.021268] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.721 [2024-04-26 15:35:52.021591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.721 [2024-04-26 15:35:52.021607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.721 [2024-04-26 15:35:52.027048] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.722 [2024-04-26 15:35:52.027386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.722 [2024-04-26 15:35:52.027402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.722 [2024-04-26 15:35:52.034645] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.722 [2024-04-26 15:35:52.034968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.722 [2024-04-26 15:35:52.034984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.722 [2024-04-26 15:35:52.041338] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.722 [2024-04-26 15:35:52.041664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.722 [2024-04-26 15:35:52.041680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.722 [2024-04-26 15:35:52.051418] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.722 [2024-04-26 15:35:52.051749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.722 [2024-04-26 15:35:52.051764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.722 [2024-04-26 15:35:52.058399] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.722 [2024-04-26 15:35:52.058492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.722 [2024-04-26 15:35:52.058506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.722 [2024-04-26 15:35:52.067101] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 
00:25:34.722 [2024-04-26 15:35:52.067437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.722 [2024-04-26 15:35:52.067454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.722 [2024-04-26 15:35:52.075477] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.722 [2024-04-26 15:35:52.075822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.722 [2024-04-26 15:35:52.075842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.722 [2024-04-26 15:35:52.084276] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.722 [2024-04-26 15:35:52.084627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.722 [2024-04-26 15:35:52.084642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.722 [2024-04-26 15:35:52.091922] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.722 [2024-04-26 15:35:52.092036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.722 [2024-04-26 15:35:52.092051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.722 [2024-04-26 15:35:52.101239] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.722 [2024-04-26 15:35:52.101552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.722 [2024-04-26 15:35:52.101567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.722 [2024-04-26 15:35:52.109168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.722 [2024-04-26 15:35:52.109502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.722 [2024-04-26 15:35:52.109517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.722 [2024-04-26 15:35:52.119224] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.722 [2024-04-26 15:35:52.119594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.722 [2024-04-26 15:35:52.119613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.722 [2024-04-26 15:35:52.131872] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.722 [2024-04-26 15:35:52.132230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.722 [2024-04-26 15:35:52.132246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.722 [2024-04-26 15:35:52.144633] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.722 [2024-04-26 15:35:52.144985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.722 [2024-04-26 15:35:52.145001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.722 [2024-04-26 15:35:52.157488] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.722 [2024-04-26 15:35:52.157806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.722 [2024-04-26 15:35:52.157822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.722 [2024-04-26 15:35:52.168491] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.722 [2024-04-26 15:35:52.168824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.722 [2024-04-26 15:35:52.168846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.982 [2024-04-26 15:35:52.179087] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.982 [2024-04-26 15:35:52.179431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.982 [2024-04-26 15:35:52.179447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:25:34.982 [2024-04-26 15:35:52.189113] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.982 [2024-04-26 15:35:52.189440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.982 [2024-04-26 15:35:52.189456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:34.982 [2024-04-26 15:35:52.200500] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.982 [2024-04-26 15:35:52.200849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.982 [2024-04-26 15:35:52.200865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:34.982 [2024-04-26 15:35:52.209456] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.982 [2024-04-26 15:35:52.209804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.982 [2024-04-26 15:35:52.209819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:34.982 [2024-04-26 15:35:52.220335] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf1120) with pdu=0x2000190fef90 00:25:34.982 [2024-04-26 15:35:52.220434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.982 [2024-04-26 15:35:52.220448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.982 00:25:34.982 Latency(us) 00:25:34.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.982 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:34.982 nvme0n1 : 2.01 3538.68 442.33 0.00 0.00 4511.29 2075.31 13762.56 00:25:34.982 =================================================================================================================== 00:25:34.982 Total : 3538.68 442.33 0.00 0.00 4511.29 2075.31 13762.56 00:25:34.982 0 00:25:34.982 15:35:52 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:34.982 15:35:52 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:34.982 15:35:52 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:34.982 | .driver_specific 00:25:34.982 | .nvme_error 00:25:34.982 | .status_code 00:25:34.982 | .command_transient_transport_error' 00:25:34.982 15:35:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:34.982 15:35:52 -- host/digest.sh@71 -- # (( 229 > 0 )) 00:25:34.982 15:35:52 -- host/digest.sh@73 -- # killprocess 1776121 00:25:34.982 15:35:52 -- common/autotest_common.sh@936 -- # '[' -z 1776121 ']' 00:25:34.983 15:35:52 -- common/autotest_common.sh@940 -- # kill -0 1776121 00:25:34.983 15:35:52 -- common/autotest_common.sh@941 -- # uname 00:25:34.983 15:35:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:34.983 15:35:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1776121 00:25:35.242 15:35:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:35.242 15:35:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:35.242 15:35:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1776121' 00:25:35.242 killing process with pid 1776121 00:25:35.242 15:35:52 -- common/autotest_common.sh@955 
-- # kill 1776121 00:25:35.242 Received shutdown signal, test time was about 2.000000 seconds 00:25:35.242 00:25:35.242 Latency(us) 00:25:35.242 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:35.242 =================================================================================================================== 00:25:35.242 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:35.242 15:35:52 -- common/autotest_common.sh@960 -- # wait 1776121 00:25:35.242 15:35:52 -- host/digest.sh@116 -- # killprocess 1773721 00:25:35.242 15:35:52 -- common/autotest_common.sh@936 -- # '[' -z 1773721 ']' 00:25:35.242 15:35:52 -- common/autotest_common.sh@940 -- # kill -0 1773721 00:25:35.242 15:35:52 -- common/autotest_common.sh@941 -- # uname 00:25:35.242 15:35:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:35.242 15:35:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1773721 00:25:35.242 15:35:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:35.242 15:35:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:35.242 15:35:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1773721' 00:25:35.242 killing process with pid 1773721 00:25:35.242 15:35:52 -- common/autotest_common.sh@955 -- # kill 1773721 00:25:35.242 15:35:52 -- common/autotest_common.sh@960 -- # wait 1773721 00:25:35.503 00:25:35.503 real 0m16.096s 00:25:35.503 user 0m31.648s 00:25:35.503 sys 0m3.357s 00:25:35.503 15:35:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:35.503 15:35:52 -- common/autotest_common.sh@10 -- # set +x 00:25:35.503 ************************************ 00:25:35.503 END TEST nvmf_digest_error 00:25:35.503 ************************************ 00:25:35.503 15:35:52 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:35.503 15:35:52 -- host/digest.sh@150 -- # nvmftestfini 00:25:35.503 15:35:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:35.503 15:35:52 -- 
nvmf/common.sh@117 -- # sync 00:25:35.503 15:35:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:35.503 15:35:52 -- nvmf/common.sh@120 -- # set +e 00:25:35.503 15:35:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:35.503 15:35:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:35.503 rmmod nvme_tcp 00:25:35.503 rmmod nvme_fabrics 00:25:35.503 rmmod nvme_keyring 00:25:35.503 15:35:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:35.503 15:35:52 -- nvmf/common.sh@124 -- # set -e 00:25:35.503 15:35:52 -- nvmf/common.sh@125 -- # return 0 00:25:35.503 15:35:52 -- nvmf/common.sh@478 -- # '[' -n 1773721 ']' 00:25:35.503 15:35:52 -- nvmf/common.sh@479 -- # killprocess 1773721 00:25:35.503 15:35:52 -- common/autotest_common.sh@936 -- # '[' -z 1773721 ']' 00:25:35.503 15:35:52 -- common/autotest_common.sh@940 -- # kill -0 1773721 00:25:35.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1773721) - No such process 00:25:35.503 15:35:52 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1773721 is not found' 00:25:35.503 Process with pid 1773721 is not found 00:25:35.503 15:35:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:35.503 15:35:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:35.503 15:35:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:35.503 15:35:52 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:35.503 15:35:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:35.503 15:35:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.503 15:35:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:35.503 15:35:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.049 15:35:54 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:38.049 00:25:38.049 real 0m41.947s 00:25:38.049 user 1m5.103s 00:25:38.049 sys 0m12.343s 00:25:38.049 15:35:54 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:25:38.049 15:35:54 -- common/autotest_common.sh@10 -- # set +x 00:25:38.049 ************************************ 00:25:38.049 END TEST nvmf_digest 00:25:38.049 ************************************ 00:25:38.049 15:35:54 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:25:38.049 15:35:54 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:25:38.049 15:35:54 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:25:38.049 15:35:54 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:38.049 15:35:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:38.049 15:35:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:38.049 15:35:54 -- common/autotest_common.sh@10 -- # set +x 00:25:38.049 ************************************ 00:25:38.049 START TEST nvmf_bdevperf 00:25:38.049 ************************************ 00:25:38.049 15:35:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:38.049 * Looking for test storage... 
00:25:38.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:38.049 15:35:55 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:38.049 15:35:55 -- nvmf/common.sh@7 -- # uname -s 00:25:38.049 15:35:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:38.049 15:35:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:38.049 15:35:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:38.049 15:35:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:38.049 15:35:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:38.049 15:35:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:38.049 15:35:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:38.049 15:35:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:38.049 15:35:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:38.049 15:35:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:38.049 15:35:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:38.049 15:35:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:38.049 15:35:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:38.049 15:35:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:38.049 15:35:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:38.049 15:35:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:38.050 15:35:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:38.050 15:35:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:38.050 15:35:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:38.050 15:35:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:38.050 15:35:55 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.050 15:35:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.050 15:35:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.050 15:35:55 -- paths/export.sh@5 -- # export PATH 00:25:38.050 15:35:55 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.050 15:35:55 -- nvmf/common.sh@47 -- # : 0 00:25:38.050 15:35:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:38.050 15:35:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:38.050 15:35:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:38.050 15:35:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:38.050 15:35:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:38.050 15:35:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:38.050 15:35:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:38.050 15:35:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:38.050 15:35:55 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:38.050 15:35:55 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:38.050 15:35:55 -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:38.050 15:35:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:38.050 15:35:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:38.050 15:35:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:38.050 15:35:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:38.050 15:35:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:38.050 15:35:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.050 15:35:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:38.050 15:35:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.050 15:35:55 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:38.050 15:35:55 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:38.050 15:35:55 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:38.050 15:35:55 -- common/autotest_common.sh@10 -- # set +x 00:25:46.188 15:36:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:46.188 15:36:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:46.188 15:36:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:46.188 15:36:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:46.188 15:36:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:46.188 15:36:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:46.188 15:36:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:46.188 15:36:02 -- nvmf/common.sh@295 -- # net_devs=() 00:25:46.188 15:36:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:46.188 15:36:02 -- nvmf/common.sh@296 -- # e810=() 00:25:46.188 15:36:02 -- nvmf/common.sh@296 -- # local -ga e810 00:25:46.188 15:36:02 -- nvmf/common.sh@297 -- # x722=() 00:25:46.188 15:36:02 -- nvmf/common.sh@297 -- # local -ga x722 00:25:46.188 15:36:02 -- nvmf/common.sh@298 -- # mlx=() 00:25:46.188 15:36:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:46.188 15:36:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:46.188 15:36:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:46.188 15:36:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:46.188 15:36:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:46.188 15:36:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:46.188 15:36:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:46.188 15:36:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:46.188 15:36:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:46.188 15:36:02 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:46.188 15:36:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:46.188 15:36:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:46.188 15:36:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:46.188 15:36:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:46.188 15:36:02 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:46.188 15:36:02 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:46.188 15:36:02 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:46.188 15:36:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:46.188 15:36:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:46.188 15:36:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:46.188 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:46.188 15:36:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:46.188 15:36:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:46.188 15:36:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.188 15:36:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.188 15:36:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:46.188 15:36:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:46.188 15:36:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:46.188 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:46.188 15:36:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:46.188 15:36:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:46.188 15:36:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.188 15:36:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.188 15:36:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:46.188 15:36:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:46.188 15:36:02 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:46.188 15:36:02 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:46.188 15:36:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:46.188 15:36:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.188 15:36:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:46.188 15:36:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.188 15:36:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:46.188 Found net devices under 0000:31:00.0: cvl_0_0 00:25:46.188 15:36:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.188 15:36:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:46.188 15:36:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.188 15:36:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:46.188 15:36:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.188 15:36:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:46.188 Found net devices under 0000:31:00.1: cvl_0_1 00:25:46.188 15:36:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.188 15:36:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:46.188 15:36:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:46.188 15:36:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:46.188 15:36:02 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:46.188 15:36:02 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:46.188 15:36:02 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:46.188 15:36:02 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:46.188 15:36:02 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:46.188 15:36:02 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:46.188 15:36:02 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:46.188 15:36:02 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:46.188 15:36:02 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:46.188 15:36:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:46.188 15:36:02 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:46.188 15:36:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:46.188 15:36:02 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:46.188 15:36:02 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:46.188 15:36:02 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:46.188 15:36:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:46.188 15:36:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:46.188 15:36:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:46.188 15:36:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:46.188 15:36:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:46.188 15:36:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:46.188 15:36:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:46.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:46.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:25:46.188 00:25:46.188 --- 10.0.0.2 ping statistics --- 00:25:46.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.188 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:25:46.188 15:36:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:46.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:46.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:25:46.188 00:25:46.188 --- 10.0.0.1 ping statistics --- 00:25:46.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.188 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:25:46.188 15:36:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:46.188 15:36:02 -- nvmf/common.sh@411 -- # return 0 00:25:46.188 15:36:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:46.188 15:36:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:46.188 15:36:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:46.188 15:36:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:46.188 15:36:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:46.188 15:36:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:46.188 15:36:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:46.188 15:36:02 -- host/bdevperf.sh@25 -- # tgt_init 00:25:46.188 15:36:02 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:46.188 15:36:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:46.188 15:36:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:46.188 15:36:02 -- common/autotest_common.sh@10 -- # set +x 00:25:46.188 15:36:02 -- nvmf/common.sh@470 -- # nvmfpid=1781234 00:25:46.188 15:36:02 -- nvmf/common.sh@471 -- # waitforlisten 1781234 00:25:46.188 15:36:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:46.188 15:36:02 -- common/autotest_common.sh@817 -- # '[' -z 1781234 ']' 00:25:46.188 15:36:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.188 15:36:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:46.188 15:36:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:46.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.188 15:36:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:46.188 15:36:02 -- common/autotest_common.sh@10 -- # set +x 00:25:46.188 [2024-04-26 15:36:02.698571] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:25:46.188 [2024-04-26 15:36:02.698636] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:46.188 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.188 [2024-04-26 15:36:02.789241] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:46.188 [2024-04-26 15:36:02.881348] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:46.188 [2024-04-26 15:36:02.881419] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:46.188 [2024-04-26 15:36:02.881427] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:46.188 [2024-04-26 15:36:02.881434] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:46.188 [2024-04-26 15:36:02.881441] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:46.188 [2024-04-26 15:36:02.881782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:46.188 [2024-04-26 15:36:02.881920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:46.188 [2024-04-26 15:36:02.881941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.188 15:36:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:46.189 15:36:03 -- common/autotest_common.sh@850 -- # return 0 00:25:46.189 15:36:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:46.189 15:36:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:46.189 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:25:46.189 15:36:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:46.189 15:36:03 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:46.189 15:36:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.189 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:25:46.189 [2024-04-26 15:36:03.523832] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.189 15:36:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.189 15:36:03 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:46.189 15:36:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.189 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:25:46.189 Malloc0 00:25:46.189 15:36:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.189 15:36:03 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:46.189 15:36:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.189 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:25:46.189 15:36:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.189 15:36:03 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:46.189 15:36:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.189 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:25:46.189 15:36:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.189 15:36:03 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:46.189 15:36:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.189 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:25:46.189 [2024-04-26 15:36:03.588262] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:46.189 15:36:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.189 15:36:03 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:46.189 15:36:03 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:46.189 15:36:03 -- nvmf/common.sh@521 -- # config=() 00:25:46.189 15:36:03 -- nvmf/common.sh@521 -- # local subsystem config 00:25:46.189 15:36:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:46.189 15:36:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:46.189 { 00:25:46.189 "params": { 00:25:46.189 "name": "Nvme$subsystem", 00:25:46.189 "trtype": "$TEST_TRANSPORT", 00:25:46.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.189 "adrfam": "ipv4", 00:25:46.189 "trsvcid": "$NVMF_PORT", 00:25:46.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.189 "hdgst": ${hdgst:-false}, 00:25:46.189 "ddgst": ${ddgst:-false} 00:25:46.189 }, 00:25:46.189 "method": "bdev_nvme_attach_controller" 00:25:46.189 } 00:25:46.189 EOF 00:25:46.189 )") 00:25:46.189 15:36:03 -- nvmf/common.sh@543 -- # cat 00:25:46.189 15:36:03 -- nvmf/common.sh@545 -- # jq . 
00:25:46.189 15:36:03 -- nvmf/common.sh@546 -- # IFS=, 00:25:46.189 15:36:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:46.189 "params": { 00:25:46.189 "name": "Nvme1", 00:25:46.189 "trtype": "tcp", 00:25:46.189 "traddr": "10.0.0.2", 00:25:46.189 "adrfam": "ipv4", 00:25:46.189 "trsvcid": "4420", 00:25:46.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:46.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:46.189 "hdgst": false, 00:25:46.189 "ddgst": false 00:25:46.189 }, 00:25:46.189 "method": "bdev_nvme_attach_controller" 00:25:46.189 }' 00:25:46.448 [2024-04-26 15:36:03.639433] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:25:46.449 [2024-04-26 15:36:03.639482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1781343 ] 00:25:46.449 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.449 [2024-04-26 15:36:03.699043] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.449 [2024-04-26 15:36:03.761650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.709 Running I/O for 1 seconds... 
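The xtrace above shows `gen_nvmf_target_json` expanding its heredoc template and piping the result through `jq` before bdevperf reads it from `/dev/fd/62`. As a rough sketch only (the real helper is shell in the test tree, and the default values here simply mirror the expanded JSON printed above), the substitution amounts to:

```python
import json

# Rebuild the attach-controller entry that the log's printf output shows.
# Illustrative sketch; defaults are copied from the expanded JSON above.
def target_json(subsystem: int = 1, traddr: str = "10.0.0.2",
                trsvcid: str = "4420") -> dict:
    return {
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": "tcp",
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": False,
            "ddgst": False,
        },
        "method": "bdev_nvme_attach_controller",
    }

print(json.dumps(target_json(), indent=2))
```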
00:25:47.653 00:25:47.653 Latency(us) 00:25:47.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:47.653 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:47.653 Verification LBA range: start 0x0 length 0x4000 00:25:47.653 Nvme1n1 : 1.05 8563.59 33.45 0.00 0.00 14334.90 2198.19 48496.64 00:25:47.653 =================================================================================================================== 00:25:47.653 Total : 8563.59 33.45 0.00 0.00 14334.90 2198.19 48496.64 00:25:47.913 15:36:05 -- host/bdevperf.sh@30 -- # bdevperfpid=1781680 00:25:47.913 15:36:05 -- host/bdevperf.sh@32 -- # sleep 3 00:25:47.913 15:36:05 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:47.913 15:36:05 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:47.913 15:36:05 -- nvmf/common.sh@521 -- # config=() 00:25:47.913 15:36:05 -- nvmf/common.sh@521 -- # local subsystem config 00:25:47.913 15:36:05 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:47.913 15:36:05 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:47.913 { 00:25:47.913 "params": { 00:25:47.913 "name": "Nvme$subsystem", 00:25:47.913 "trtype": "$TEST_TRANSPORT", 00:25:47.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.913 "adrfam": "ipv4", 00:25:47.913 "trsvcid": "$NVMF_PORT", 00:25:47.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.913 "hdgst": ${hdgst:-false}, 00:25:47.913 "ddgst": ${ddgst:-false} 00:25:47.913 }, 00:25:47.913 "method": "bdev_nvme_attach_controller" 00:25:47.913 } 00:25:47.913 EOF 00:25:47.913 )") 00:25:47.913 15:36:05 -- nvmf/common.sh@543 -- # cat 00:25:47.913 15:36:05 -- nvmf/common.sh@545 -- # jq . 
00:25:47.913 15:36:05 -- nvmf/common.sh@546 -- # IFS=, 00:25:47.913 15:36:05 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:47.913 "params": { 00:25:47.913 "name": "Nvme1", 00:25:47.913 "trtype": "tcp", 00:25:47.913 "traddr": "10.0.0.2", 00:25:47.913 "adrfam": "ipv4", 00:25:47.913 "trsvcid": "4420", 00:25:47.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:47.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:47.913 "hdgst": false, 00:25:47.913 "ddgst": false 00:25:47.913 }, 00:25:47.913 "method": "bdev_nvme_attach_controller" 00:25:47.913 }' 00:25:47.913 [2024-04-26 15:36:05.266381] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:25:47.913 [2024-04-26 15:36:05.266433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1781680 ] 00:25:47.913 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.913 [2024-04-26 15:36:05.326548] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.174 [2024-04-26 15:36:05.389782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.435 Running I/O for 15 seconds... 
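For the 1-second run summarized above, the MiB/s column is consistent with IOPS times I/O size: 8563.59 IOPS at 4096 bytes per I/O is about 33.45 MiB/s. A quick check of that arithmetic:

```python
iops = 8563.59       # Nvme1n1 row in the 1-second summary above
io_size = 4096       # -o 4096 passed to bdevperf
mib_per_s = iops * io_size / (1024 * 1024)
print(round(mib_per_s, 2))  # 33.45
```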
00:25:51.038 15:36:08 -- host/bdevperf.sh@33 -- # kill -9 1781234 00:25:51.038 15:36:08 -- host/bdevperf.sh@35 -- # sleep 3 00:25:51.038 [2024-04-26 15:36:08.232765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.038 [2024-04-26 15:36:08.232807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.038 [2024-04-26 15:36:08.232840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.038 [2024-04-26 15:36:08.232851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.038 [2024-04-26 15:36:08.232861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.038 [2024-04-26 15:36:08.232870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.038 [2024-04-26 15:36:08.232881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.038 [2024-04-26 15:36:08.232890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.038 [2024-04-26 15:36:08.232900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.038 [2024-04-26 15:36:08.232910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.038 [2024-04-26 15:36:08.232921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 
nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.038 [2024-04-26 15:36:08.232928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.038 [2024-04-26 15:36:08.232939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.038 [2024-04-26 15:36:08.232949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.038 [2024-04-26 15:36:08.232960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.038 [2024-04-26 15:36:08.232970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.038 [2024-04-26 15:36:08.232982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.038 [2024-04-26 15:36:08.232990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.038 [2024-04-26 15:36:08.233000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.038 [2024-04-26 15:36:08.233007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.038 [2024-04-26 15:36:08.233018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.038 [2024-04-26 15:36:08.233027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:51.038 [2024-04-26 15:36:08.233038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.038 [2024-04-26 15:36:08.233047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.038 [2024-04-26 15:36:08.233058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:79776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233146] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:79832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 
[2024-04-26 15:36:08.233364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:79888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233453] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 
lba:79968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 
[2024-04-26 15:36:08.233645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.039 [2024-04-26 15:36:08.233717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.039 [2024-04-26 15:36:08.233726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.040 [2024-04-26 15:36:08.233733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.040 [2024-04-26 15:36:08.233743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.040 [2024-04-26 15:36:08.233751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.040 [2024-04-26 15:36:08.233761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.040 [2024-04-26 15:36:08.233768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.040 [2024-04-26 15:36:08.233777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.040 [2024-04-26 15:36:08.233784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.040 [2024-04-26 15:36:08.233793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:80088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.040 [2024-04-26 15:36:08.233800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.040 [2024-04-26 15:36:08.233809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.040 [2024-04-26 15:36:08.233816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.040 [2024-04-26 15:36:08.233826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 
lba:80104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.040 [2024-04-26 15:36:08.233833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.040 [2024-04-26 15:36:08.233934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:80112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.040 [2024-04-26 15:36:08.233942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.040 [2024-04-26 15:36:08.233951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.040 [2024-04-26 15:36:08.233958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.040 [2024-04-26 15:36:08.233967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.040 [2024-04-26 15:36:08.233974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.040 [2024-04-26 15:36:08.233984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.040 [2024-04-26 15:36:08.233991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.040 [2024-04-26 15:36:08.234000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.040 [2024-04-26 15:36:08.234008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.040 
[2024-04-26 15:36:08.234017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.040 [2024-04-26 15:36:08.234024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.040 [2024-04-26 15:36:08.234040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.040 [2024-04-26 15:36:08.234059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.040 [2024-04-26 15:36:08.234076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.040 [2024-04-26 15:36:08.234091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.040 [2024-04-26 15:36:08.234108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.040 [2024-04-26 15:36:08.234125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.040 [2024-04-26 15:36:08.234141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.040 [2024-04-26 15:36:08.234158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:80224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.040 [2024-04-26 15:36:08.234174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.040 [2024-04-26 15:36:08.234190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.040 [2024-04-26 15:36:08.234207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.040 [2024-04-26 15:36:08.234223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.040 [2024-04-26 15:36:08.234239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.040 [2024-04-26 15:36:08.234257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.040 [2024-04-26 15:36:08.234274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.040 [2024-04-26 15:36:08.234290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.040 [2024-04-26 15:36:08.234307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.040 [2024-04-26 15:36:08.234324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.040 [2024-04-26 15:36:08.234340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.040 [2024-04-26 15:36:08.234356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.040 [2024-04-26 15:36:08.234372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.040 [2024-04-26 15:36:08.234388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.040 [2024-04-26 15:36:08.234405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.040 [2024-04-26 15:36:08.234421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.040 [2024-04-26 15:36:08.234436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.040 [2024-04-26 15:36:08.234453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.040 [2024-04-26 15:36:08.234463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.041 [2024-04-26 15:36:08.234814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.041 [2024-04-26 15:36:08.234830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.041 [2024-04-26 15:36:08.234851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.234991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.234998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.235006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.235014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.235023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.235030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.235039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.235045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.235055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.235062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.235071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:51.041 [2024-04-26 15:36:08.235078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.041 [2024-04-26 15:36:08.235088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e680 is same with the state(5) to be set
00:25:51.041 [2024-04-26 15:36:08.235097] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:51.041 [2024-04-26 15:36:08.235103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:51.042 [2024-04-26 15:36:08.235110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80672 len:8 PRP1 0x0 PRP2 0x0
00:25:51.042 [2024-04-26 15:36:08.235118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:51.042 [2024-04-26 15:36:08.235155] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x125e680 was disconnected and freed. reset controller.
00:25:51.042 [2024-04-26 15:36:08.238648] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.042 [2024-04-26 15:36:08.238693] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.042 [2024-04-26 15:36:08.239348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.042 [2024-04-26 15:36:08.239714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.042 [2024-04-26 15:36:08.239725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.042 [2024-04-26 15:36:08.239734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.042 [2024-04-26 15:36:08.239959] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.042 [2024-04-26 15:36:08.240178] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.042 [2024-04-26 15:36:08.240187] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.042 [2024-04-26 15:36:08.240194] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.042 [2024-04-26 15:36:08.243715] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.042 [2024-04-26 15:36:08.252866] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.042 [2024-04-26 15:36:08.253439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.042 [2024-04-26 15:36:08.253803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.042 [2024-04-26 15:36:08.253817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.042 [2024-04-26 15:36:08.253827] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.042 [2024-04-26 15:36:08.254074] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.042 [2024-04-26 15:36:08.254297] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.042 [2024-04-26 15:36:08.254305] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.042 [2024-04-26 15:36:08.254313] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.042 [2024-04-26 15:36:08.257835] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.042 [2024-04-26 15:36:08.266763] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.042 [2024-04-26 15:36:08.267302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.042 [2024-04-26 15:36:08.267534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.042 [2024-04-26 15:36:08.267543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.042 [2024-04-26 15:36:08.267556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.042 [2024-04-26 15:36:08.267774] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.042 [2024-04-26 15:36:08.267996] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.042 [2024-04-26 15:36:08.268004] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.042 [2024-04-26 15:36:08.268011] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.042 [2024-04-26 15:36:08.271535] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.042 [2024-04-26 15:36:08.280696] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.042 [2024-04-26 15:36:08.281354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.042 [2024-04-26 15:36:08.281611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.042 [2024-04-26 15:36:08.281629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.042 [2024-04-26 15:36:08.281639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.042 [2024-04-26 15:36:08.281882] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.042 [2024-04-26 15:36:08.282103] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.042 [2024-04-26 15:36:08.282112] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.042 [2024-04-26 15:36:08.282119] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.042 [2024-04-26 15:36:08.285643] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.042 [2024-04-26 15:36:08.294580] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.042 [2024-04-26 15:36:08.295255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.042 [2024-04-26 15:36:08.295512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.042 [2024-04-26 15:36:08.295525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.042 [2024-04-26 15:36:08.295534] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.042 [2024-04-26 15:36:08.295772] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.042 [2024-04-26 15:36:08.296000] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.042 [2024-04-26 15:36:08.296009] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.042 [2024-04-26 15:36:08.296017] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.042 [2024-04-26 15:36:08.299541] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.042 [2024-04-26 15:36:08.308468] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.042 [2024-04-26 15:36:08.309045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.042 [2024-04-26 15:36:08.309383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.042 [2024-04-26 15:36:08.309396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.042 [2024-04-26 15:36:08.309406] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.042 [2024-04-26 15:36:08.309647] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.042 [2024-04-26 15:36:08.309878] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.042 [2024-04-26 15:36:08.309887] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.042 [2024-04-26 15:36:08.309894] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.042 [2024-04-26 15:36:08.313420] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.042 [2024-04-26 15:36:08.322351] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.042 [2024-04-26 15:36:08.322970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.042 [2024-04-26 15:36:08.323226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.042 [2024-04-26 15:36:08.323242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.042 [2024-04-26 15:36:08.323252] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.042 [2024-04-26 15:36:08.323489] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.042 [2024-04-26 15:36:08.323710] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.042 [2024-04-26 15:36:08.323718] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.042 [2024-04-26 15:36:08.323725] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.042 [2024-04-26 15:36:08.327261] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.042 [2024-04-26 15:36:08.336196] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.042 [2024-04-26 15:36:08.336901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.042 [2024-04-26 15:36:08.337352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.042 [2024-04-26 15:36:08.337365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.042 [2024-04-26 15:36:08.337374] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.042 [2024-04-26 15:36:08.337611] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.042 [2024-04-26 15:36:08.337832] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.042 [2024-04-26 15:36:08.337848] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.042 [2024-04-26 15:36:08.337855] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.042 [2024-04-26 15:36:08.341379] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.042 [2024-04-26 15:36:08.350108] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.042 [2024-04-26 15:36:08.350780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.042 [2024-04-26 15:36:08.351175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.042 [2024-04-26 15:36:08.351189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.042 [2024-04-26 15:36:08.351198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.042 [2024-04-26 15:36:08.351435] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.042 [2024-04-26 15:36:08.351660] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.043 [2024-04-26 15:36:08.351668] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.043 [2024-04-26 15:36:08.351676] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.043 [2024-04-26 15:36:08.355201] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.043 [2024-04-26 15:36:08.363921] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.043 [2024-04-26 15:36:08.364462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.043 [2024-04-26 15:36:08.364832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.043 [2024-04-26 15:36:08.364847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.043 [2024-04-26 15:36:08.364855] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.043 [2024-04-26 15:36:08.365073] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.043 [2024-04-26 15:36:08.365290] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.043 [2024-04-26 15:36:08.365298] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.043 [2024-04-26 15:36:08.365305] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.043 [2024-04-26 15:36:08.368833] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.043 [2024-04-26 15:36:08.377793] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.043 [2024-04-26 15:36:08.378376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.043 [2024-04-26 15:36:08.378713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.043 [2024-04-26 15:36:08.378726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.043 [2024-04-26 15:36:08.378735] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.043 [2024-04-26 15:36:08.378981] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.043 [2024-04-26 15:36:08.379203] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.043 [2024-04-26 15:36:08.379212] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.043 [2024-04-26 15:36:08.379219] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.043 [2024-04-26 15:36:08.382746] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.043 [2024-04-26 15:36:08.391697] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.043 [2024-04-26 15:36:08.392267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.043 [2024-04-26 15:36:08.392665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.043 [2024-04-26 15:36:08.392678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.043 [2024-04-26 15:36:08.392687] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.043 [2024-04-26 15:36:08.392932] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.043 [2024-04-26 15:36:08.393154] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.043 [2024-04-26 15:36:08.393169] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.043 [2024-04-26 15:36:08.393177] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.043 [2024-04-26 15:36:08.396703] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.043 [2024-04-26 15:36:08.405646] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.043 [2024-04-26 15:36:08.406319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.043 [2024-04-26 15:36:08.406655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.043 [2024-04-26 15:36:08.406667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.043 [2024-04-26 15:36:08.406677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.043 [2024-04-26 15:36:08.406921] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.043 [2024-04-26 15:36:08.407142] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.043 [2024-04-26 15:36:08.407151] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.043 [2024-04-26 15:36:08.407159] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.043 [2024-04-26 15:36:08.410688] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.043 [2024-04-26 15:36:08.419433] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.043 [2024-04-26 15:36:08.420122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.043 [2024-04-26 15:36:08.420478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.043 [2024-04-26 15:36:08.420493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.043 [2024-04-26 15:36:08.420502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.043 [2024-04-26 15:36:08.420739] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.043 [2024-04-26 15:36:08.420966] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.043 [2024-04-26 15:36:08.420975] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.043 [2024-04-26 15:36:08.420982] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.043 [2024-04-26 15:36:08.424503] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.043 [2024-04-26 15:36:08.433228] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.043 [2024-04-26 15:36:08.433806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.043 [2024-04-26 15:36:08.434178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.043 [2024-04-26 15:36:08.434189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.043 [2024-04-26 15:36:08.434196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.043 [2024-04-26 15:36:08.434414] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.043 [2024-04-26 15:36:08.434631] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.043 [2024-04-26 15:36:08.434639] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.043 [2024-04-26 15:36:08.434650] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.043 [2024-04-26 15:36:08.438174] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.043 [2024-04-26 15:36:08.447102] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.043 [2024-04-26 15:36:08.447767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.043 [2024-04-26 15:36:08.448126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.043 [2024-04-26 15:36:08.448140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.043 [2024-04-26 15:36:08.448149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.043 [2024-04-26 15:36:08.448386] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.043 [2024-04-26 15:36:08.448606] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.043 [2024-04-26 15:36:08.448614] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.043 [2024-04-26 15:36:08.448621] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.043 [2024-04-26 15:36:08.452151] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.043 [2024-04-26 15:36:08.460889] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.044 [2024-04-26 15:36:08.461430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.044 [2024-04-26 15:36:08.461768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.044 [2024-04-26 15:36:08.461778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.044 [2024-04-26 15:36:08.461786] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.044 [2024-04-26 15:36:08.462010] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.044 [2024-04-26 15:36:08.462228] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.044 [2024-04-26 15:36:08.462236] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.044 [2024-04-26 15:36:08.462243] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.044 [2024-04-26 15:36:08.465764] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.044 [2024-04-26 15:36:08.474715] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.044 [2024-04-26 15:36:08.475261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.044 [2024-04-26 15:36:08.475620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.044 [2024-04-26 15:36:08.475629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.044 [2024-04-26 15:36:08.475637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.044 [2024-04-26 15:36:08.475859] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.044 [2024-04-26 15:36:08.476077] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.044 [2024-04-26 15:36:08.476086] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.044 [2024-04-26 15:36:08.476092] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.044 [2024-04-26 15:36:08.479624] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.307 [2024-04-26 15:36:08.488577] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.307 [2024-04-26 15:36:08.489220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.307 [2024-04-26 15:36:08.489706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.307 [2024-04-26 15:36:08.489719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.307 [2024-04-26 15:36:08.489729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.307 [2024-04-26 15:36:08.489972] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.307 [2024-04-26 15:36:08.490194] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.307 [2024-04-26 15:36:08.490202] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.307 [2024-04-26 15:36:08.490209] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.307 [2024-04-26 15:36:08.493736] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.307 [2024-04-26 15:36:08.502466] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.307 [2024-04-26 15:36:08.503145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.307 [2024-04-26 15:36:08.503400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.307 [2024-04-26 15:36:08.503414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.307 [2024-04-26 15:36:08.503424] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.307 [2024-04-26 15:36:08.503660] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.307 [2024-04-26 15:36:08.503889] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.307 [2024-04-26 15:36:08.503899] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.307 [2024-04-26 15:36:08.503906] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.307 [2024-04-26 15:36:08.507431] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.307 [2024-04-26 15:36:08.516360] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.307 [2024-04-26 15:36:08.516939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.307 [2024-04-26 15:36:08.517291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.307 [2024-04-26 15:36:08.517301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.307 [2024-04-26 15:36:08.517309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.307 [2024-04-26 15:36:08.517526] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.307 [2024-04-26 15:36:08.517743] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.307 [2024-04-26 15:36:08.517751] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.307 [2024-04-26 15:36:08.517757] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.307 [2024-04-26 15:36:08.521277] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.307 [2024-04-26 15:36:08.530202] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.307 [2024-04-26 15:36:08.530849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.307 [2024-04-26 15:36:08.531242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.307 [2024-04-26 15:36:08.531255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.307 [2024-04-26 15:36:08.531264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.307 [2024-04-26 15:36:08.531501] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.307 [2024-04-26 15:36:08.531721] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.307 [2024-04-26 15:36:08.531729] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.307 [2024-04-26 15:36:08.531736] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.307 [2024-04-26 15:36:08.535270] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.307 [2024-04-26 15:36:08.543993] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.307 [2024-04-26 15:36:08.544673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.307 [2024-04-26 15:36:08.545052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.307 [2024-04-26 15:36:08.545066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.307 [2024-04-26 15:36:08.545075] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.307 [2024-04-26 15:36:08.545312] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.307 [2024-04-26 15:36:08.545532] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.307 [2024-04-26 15:36:08.545540] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.307 [2024-04-26 15:36:08.545548] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.307 [2024-04-26 15:36:08.549078] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.307 [2024-04-26 15:36:08.557810] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.307 [2024-04-26 15:36:08.558483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.307 [2024-04-26 15:36:08.558850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.307 [2024-04-26 15:36:08.558864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.307 [2024-04-26 15:36:08.558873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.307 [2024-04-26 15:36:08.559109] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.307 [2024-04-26 15:36:08.559330] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.307 [2024-04-26 15:36:08.559338] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.307 [2024-04-26 15:36:08.559346] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.307 [2024-04-26 15:36:08.562875] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.307 [2024-04-26 15:36:08.571603] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.307 [2024-04-26 15:36:08.572258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.307 [2024-04-26 15:36:08.572624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.307 [2024-04-26 15:36:08.572636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.307 [2024-04-26 15:36:08.572646] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.307 [2024-04-26 15:36:08.572891] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.307 [2024-04-26 15:36:08.573112] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.307 [2024-04-26 15:36:08.573120] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.307 [2024-04-26 15:36:08.573128] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.307 [2024-04-26 15:36:08.576649] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.307 [2024-04-26 15:36:08.585578] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.307 [2024-04-26 15:36:08.586240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.307 [2024-04-26 15:36:08.586604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.307 [2024-04-26 15:36:08.586616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.307 [2024-04-26 15:36:08.586625] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.307 [2024-04-26 15:36:08.586870] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.307 [2024-04-26 15:36:08.587092] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.307 [2024-04-26 15:36:08.587100] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.307 [2024-04-26 15:36:08.587107] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.307 [2024-04-26 15:36:08.590636] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.307 [2024-04-26 15:36:08.599366] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.307 [2024-04-26 15:36:08.600058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.307 [2024-04-26 15:36:08.600423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.307 [2024-04-26 15:36:08.600435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.307 [2024-04-26 15:36:08.600444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.307 [2024-04-26 15:36:08.600681] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.307 [2024-04-26 15:36:08.600910] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.307 [2024-04-26 15:36:08.600919] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.307 [2024-04-26 15:36:08.600926] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.307 [2024-04-26 15:36:08.604450] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.307 [2024-04-26 15:36:08.613165] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.307 [2024-04-26 15:36:08.613834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.307 [2024-04-26 15:36:08.614203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.307 [2024-04-26 15:36:08.614220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.307 [2024-04-26 15:36:08.614229] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.307 [2024-04-26 15:36:08.614465] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.307 [2024-04-26 15:36:08.614686] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.307 [2024-04-26 15:36:08.614694] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.307 [2024-04-26 15:36:08.614701] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.307 [2024-04-26 15:36:08.618229] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.307 [2024-04-26 15:36:08.626959] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.307 [2024-04-26 15:36:08.627545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.307 [2024-04-26 15:36:08.627878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.307 [2024-04-26 15:36:08.627889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.307 [2024-04-26 15:36:08.627897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.307 [2024-04-26 15:36:08.628114] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.307 [2024-04-26 15:36:08.628331] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.307 [2024-04-26 15:36:08.628339] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.307 [2024-04-26 15:36:08.628346] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.307 [2024-04-26 15:36:08.631871] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.307 [2024-04-26 15:36:08.640809] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.307 [2024-04-26 15:36:08.641345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.307 [2024-04-26 15:36:08.641672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.307 [2024-04-26 15:36:08.641682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.307 [2024-04-26 15:36:08.641689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.307 [2024-04-26 15:36:08.641912] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.307 [2024-04-26 15:36:08.642130] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.307 [2024-04-26 15:36:08.642137] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.307 [2024-04-26 15:36:08.642144] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.307 [2024-04-26 15:36:08.645670] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.307 [2024-04-26 15:36:08.654617] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.307 [2024-04-26 15:36:08.655035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.307 [2024-04-26 15:36:08.655388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.307 [2024-04-26 15:36:08.655398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.307 [2024-04-26 15:36:08.655409] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.307 [2024-04-26 15:36:08.655628] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.307 [2024-04-26 15:36:08.655850] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.307 [2024-04-26 15:36:08.655858] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.307 [2024-04-26 15:36:08.655865] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.307 [2024-04-26 15:36:08.659386] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.307 [2024-04-26 15:36:08.668529] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.308 [2024-04-26 15:36:08.669065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.308 [2024-04-26 15:36:08.669425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.308 [2024-04-26 15:36:08.669436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.308 [2024-04-26 15:36:08.669444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.308 [2024-04-26 15:36:08.669661] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.308 [2024-04-26 15:36:08.669882] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.308 [2024-04-26 15:36:08.669890] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.308 [2024-04-26 15:36:08.669896] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.308 [2024-04-26 15:36:08.673435] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.308 [2024-04-26 15:36:08.682385] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.308 [2024-04-26 15:36:08.682957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.308 [2024-04-26 15:36:08.683299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.308 [2024-04-26 15:36:08.683308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.308 [2024-04-26 15:36:08.683315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.308 [2024-04-26 15:36:08.683533] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.308 [2024-04-26 15:36:08.683750] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.308 [2024-04-26 15:36:08.683757] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.308 [2024-04-26 15:36:08.683764] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.308 [2024-04-26 15:36:08.687294] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.308 [2024-04-26 15:36:08.696241] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.308 [2024-04-26 15:36:08.696772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.308 [2024-04-26 15:36:08.697127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.308 [2024-04-26 15:36:08.697138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.308 [2024-04-26 15:36:08.697145] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.308 [2024-04-26 15:36:08.697367] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.308 [2024-04-26 15:36:08.697584] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.308 [2024-04-26 15:36:08.697591] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.308 [2024-04-26 15:36:08.697598] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.308 [2024-04-26 15:36:08.701127] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.308 [2024-04-26 15:36:08.710073] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.308 [2024-04-26 15:36:08.710743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.308 [2024-04-26 15:36:08.711099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.308 [2024-04-26 15:36:08.711113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.308 [2024-04-26 15:36:08.711122] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.308 [2024-04-26 15:36:08.711359] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.308 [2024-04-26 15:36:08.711580] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.308 [2024-04-26 15:36:08.711588] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.308 [2024-04-26 15:36:08.711595] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.308 [2024-04-26 15:36:08.715134] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.308 [2024-04-26 15:36:08.723871] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.308 [2024-04-26 15:36:08.724549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.308 [2024-04-26 15:36:08.724911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.308 [2024-04-26 15:36:08.724925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.308 [2024-04-26 15:36:08.724934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.308 [2024-04-26 15:36:08.725171] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.308 [2024-04-26 15:36:08.725392] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.308 [2024-04-26 15:36:08.725400] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.308 [2024-04-26 15:36:08.725407] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.308 [2024-04-26 15:36:08.728939] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.308 [2024-04-26 15:36:08.737668] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.308 [2024-04-26 15:36:08.738243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.308 [2024-04-26 15:36:08.738590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.308 [2024-04-26 15:36:08.738600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.308 [2024-04-26 15:36:08.738608] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.308 [2024-04-26 15:36:08.738826] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.308 [2024-04-26 15:36:08.739054] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.308 [2024-04-26 15:36:08.739063] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.308 [2024-04-26 15:36:08.739070] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.308 [2024-04-26 15:36:08.742589] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.308 [2024-04-26 15:36:08.751531] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.308 [2024-04-26 15:36:08.752102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.308 [2024-04-26 15:36:08.752455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.308 [2024-04-26 15:36:08.752465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.308 [2024-04-26 15:36:08.752472] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.308 [2024-04-26 15:36:08.752690] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.308 [2024-04-26 15:36:08.752912] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.308 [2024-04-26 15:36:08.752921] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.308 [2024-04-26 15:36:08.752927] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.571 [2024-04-26 15:36:08.756451] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.571 [2024-04-26 15:36:08.765387] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.571 [2024-04-26 15:36:08.765954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.571 [2024-04-26 15:36:08.766293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.571 [2024-04-26 15:36:08.766303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.571 [2024-04-26 15:36:08.766311] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.571 [2024-04-26 15:36:08.766528] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.571 [2024-04-26 15:36:08.766745] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.571 [2024-04-26 15:36:08.766753] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.571 [2024-04-26 15:36:08.766759] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.571 [2024-04-26 15:36:08.770286] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.571 [2024-04-26 15:36:08.779231] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.571 [2024-04-26 15:36:08.779783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.571 [2024-04-26 15:36:08.780165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.571 [2024-04-26 15:36:08.780175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.571 [2024-04-26 15:36:08.780183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.571 [2024-04-26 15:36:08.780401] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.571 [2024-04-26 15:36:08.780618] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.571 [2024-04-26 15:36:08.780629] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.571 [2024-04-26 15:36:08.780636] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.571 [2024-04-26 15:36:08.784161] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.571 [2024-04-26 15:36:08.793096] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.571 [2024-04-26 15:36:08.793634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.571 [2024-04-26 15:36:08.793965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.571 [2024-04-26 15:36:08.793975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.571 [2024-04-26 15:36:08.793983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.571 [2024-04-26 15:36:08.794201] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.571 [2024-04-26 15:36:08.794418] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.571 [2024-04-26 15:36:08.794426] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.571 [2024-04-26 15:36:08.794433] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.571 [2024-04-26 15:36:08.797966] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.571 [2024-04-26 15:36:08.806913] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.571 [2024-04-26 15:36:08.807440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.571 [2024-04-26 15:36:08.807786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.571 [2024-04-26 15:36:08.807795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.571 [2024-04-26 15:36:08.807803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.571 [2024-04-26 15:36:08.808025] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.571 [2024-04-26 15:36:08.808243] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.571 [2024-04-26 15:36:08.808250] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.571 [2024-04-26 15:36:08.808257] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.571 [2024-04-26 15:36:08.811782] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.571 [2024-04-26 15:36:08.820726] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.571 [2024-04-26 15:36:08.821248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.571 [2024-04-26 15:36:08.821596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.571 [2024-04-26 15:36:08.821605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.571 [2024-04-26 15:36:08.821613] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.571 [2024-04-26 15:36:08.821831] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.571 [2024-04-26 15:36:08.822055] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.571 [2024-04-26 15:36:08.822063] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.571 [2024-04-26 15:36:08.822073] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.571 [2024-04-26 15:36:08.825594] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.571 [2024-04-26 15:36:08.834525] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.571 [2024-04-26 15:36:08.835051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.571 [2024-04-26 15:36:08.835399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.571 [2024-04-26 15:36:08.835409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.571 [2024-04-26 15:36:08.835416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.571 [2024-04-26 15:36:08.835633] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.571 [2024-04-26 15:36:08.835855] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.571 [2024-04-26 15:36:08.835863] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.571 [2024-04-26 15:36:08.835869] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.571 [2024-04-26 15:36:08.839420] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.571 [2024-04-26 15:36:08.848365] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.571 [2024-04-26 15:36:08.848946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.571 [2024-04-26 15:36:08.849308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.571 [2024-04-26 15:36:08.849321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.571 [2024-04-26 15:36:08.849330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.571 [2024-04-26 15:36:08.849567] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.571 [2024-04-26 15:36:08.849788] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.571 [2024-04-26 15:36:08.849796] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.571 [2024-04-26 15:36:08.849804] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.571 [2024-04-26 15:36:08.853334] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.571 [2024-04-26 15:36:08.862258] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.571 [2024-04-26 15:36:08.862932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.571 [2024-04-26 15:36:08.863291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.571 [2024-04-26 15:36:08.863303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.571 [2024-04-26 15:36:08.863312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.571 [2024-04-26 15:36:08.863549] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.572 [2024-04-26 15:36:08.863770] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.572 [2024-04-26 15:36:08.863778] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.572 [2024-04-26 15:36:08.863785] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.572 [2024-04-26 15:36:08.867323] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.572 [2024-04-26 15:36:08.876055] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.572 [2024-04-26 15:36:08.876692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.572 [2024-04-26 15:36:08.877119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.572 [2024-04-26 15:36:08.877134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.572 [2024-04-26 15:36:08.877144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.572 [2024-04-26 15:36:08.877380] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.572 [2024-04-26 15:36:08.877601] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.572 [2024-04-26 15:36:08.877609] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.572 [2024-04-26 15:36:08.877617] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.572 [2024-04-26 15:36:08.881367] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.572 [2024-04-26 15:36:08.889892] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.572 [2024-04-26 15:36:08.890408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.572 [2024-04-26 15:36:08.890777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.572 [2024-04-26 15:36:08.890790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.572 [2024-04-26 15:36:08.890799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.572 [2024-04-26 15:36:08.891044] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.572 [2024-04-26 15:36:08.891265] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.572 [2024-04-26 15:36:08.891274] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.572 [2024-04-26 15:36:08.891281] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.572 [2024-04-26 15:36:08.894801] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.572 [2024-04-26 15:36:08.903740] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.572 [2024-04-26 15:36:08.904361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.572 [2024-04-26 15:36:08.904725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.572 [2024-04-26 15:36:08.904738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.572 [2024-04-26 15:36:08.904747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.572 [2024-04-26 15:36:08.904991] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.572 [2024-04-26 15:36:08.905213] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.572 [2024-04-26 15:36:08.905221] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.572 [2024-04-26 15:36:08.905228] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.572 [2024-04-26 15:36:08.908754] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.572 [2024-04-26 15:36:08.917701] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.572 [2024-04-26 15:36:08.918245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.572 [2024-04-26 15:36:08.918602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.572 [2024-04-26 15:36:08.918612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.572 [2024-04-26 15:36:08.918619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.572 [2024-04-26 15:36:08.918843] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.572 [2024-04-26 15:36:08.919061] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.572 [2024-04-26 15:36:08.919069] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.572 [2024-04-26 15:36:08.919076] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.572 [2024-04-26 15:36:08.922594] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.572 [2024-04-26 15:36:08.931526] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.572 [2024-04-26 15:36:08.932186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.572 [2024-04-26 15:36:08.932557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.572 [2024-04-26 15:36:08.932570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.572 [2024-04-26 15:36:08.932579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.572 [2024-04-26 15:36:08.932816] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.572 [2024-04-26 15:36:08.933042] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.572 [2024-04-26 15:36:08.933051] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.572 [2024-04-26 15:36:08.933058] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.572 [2024-04-26 15:36:08.936581] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.572 [2024-04-26 15:36:08.945303] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.572 [2024-04-26 15:36:08.945961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.572 [2024-04-26 15:36:08.946323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.572 [2024-04-26 15:36:08.946336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.572 [2024-04-26 15:36:08.946345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.572 [2024-04-26 15:36:08.946581] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.572 [2024-04-26 15:36:08.946802] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.572 [2024-04-26 15:36:08.946810] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.572 [2024-04-26 15:36:08.946817] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.572 [2024-04-26 15:36:08.950347] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.572 [2024-04-26 15:36:08.959072] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.572 [2024-04-26 15:36:08.959754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.572 [2024-04-26 15:36:08.960135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.572 [2024-04-26 15:36:08.960149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.572 [2024-04-26 15:36:08.960159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.572 [2024-04-26 15:36:08.960396] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.572 [2024-04-26 15:36:08.960616] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.572 [2024-04-26 15:36:08.960625] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.572 [2024-04-26 15:36:08.960632] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.572 [2024-04-26 15:36:08.964157] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.572 [2024-04-26 15:36:08.972885] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.572 [2024-04-26 15:36:08.973506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.572 [2024-04-26 15:36:08.973882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.572 [2024-04-26 15:36:08.973896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.572 [2024-04-26 15:36:08.973905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.572 [2024-04-26 15:36:08.974142] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.572 [2024-04-26 15:36:08.974363] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.572 [2024-04-26 15:36:08.974371] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.572 [2024-04-26 15:36:08.974378] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.572 [2024-04-26 15:36:08.977905] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.572 [2024-04-26 15:36:08.986844] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.572 [2024-04-26 15:36:08.987526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.572 [2024-04-26 15:36:08.987914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.572 [2024-04-26 15:36:08.987929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.572 [2024-04-26 15:36:08.987938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.572 [2024-04-26 15:36:08.988176] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.572 [2024-04-26 15:36:08.988397] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.573 [2024-04-26 15:36:08.988405] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.573 [2024-04-26 15:36:08.988412] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.573 [2024-04-26 15:36:08.991939] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.573 [2024-04-26 15:36:09.000677] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.573 [2024-04-26 15:36:09.001337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.573 [2024-04-26 15:36:09.001741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.573 [2024-04-26 15:36:09.001758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.573 [2024-04-26 15:36:09.001768] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.573 [2024-04-26 15:36:09.002011] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.573 [2024-04-26 15:36:09.002232] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.573 [2024-04-26 15:36:09.002240] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.573 [2024-04-26 15:36:09.002247] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.573 [2024-04-26 15:36:09.005768] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.573 [2024-04-26 15:36:09.014491] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:51.573 [2024-04-26 15:36:09.015175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.573 [2024-04-26 15:36:09.015563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:51.573 [2024-04-26 15:36:09.015575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:51.573 [2024-04-26 15:36:09.015585] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:51.573 [2024-04-26 15:36:09.015821] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:51.573 [2024-04-26 15:36:09.016051] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:51.573 [2024-04-26 15:36:09.016060] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:51.573 [2024-04-26 15:36:09.016068] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:51.836 [2024-04-26 15:36:09.019590] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:51.836 [2024-04-26 15:36:09.028319] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.836 [2024-04-26 15:36:09.028936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.836 [2024-04-26 15:36:09.029371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.836 [2024-04-26 15:36:09.029384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.836 [2024-04-26 15:36:09.029393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.836 [2024-04-26 15:36:09.029630] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.836 [2024-04-26 15:36:09.029859] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.836 [2024-04-26 15:36:09.029868] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.836 [2024-04-26 15:36:09.029875] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.836 [2024-04-26 15:36:09.033442] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.836 [2024-04-26 15:36:09.042178] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.836 [2024-04-26 15:36:09.042627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.836 [2024-04-26 15:36:09.042970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.836 [2024-04-26 15:36:09.042982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.836 [2024-04-26 15:36:09.042998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.836 [2024-04-26 15:36:09.043216] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.837 [2024-04-26 15:36:09.043433] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.837 [2024-04-26 15:36:09.043442] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.837 [2024-04-26 15:36:09.043449] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.837 [2024-04-26 15:36:09.046973] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.837 [2024-04-26 15:36:09.056123] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.837 [2024-04-26 15:36:09.056774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.837 [2024-04-26 15:36:09.057134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.837 [2024-04-26 15:36:09.057147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.837 [2024-04-26 15:36:09.057157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.837 [2024-04-26 15:36:09.057393] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.837 [2024-04-26 15:36:09.057614] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.837 [2024-04-26 15:36:09.057622] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.837 [2024-04-26 15:36:09.057630] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.837 [2024-04-26 15:36:09.061164] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.837 [2024-04-26 15:36:09.069905] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.837 [2024-04-26 15:36:09.070559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.837 [2024-04-26 15:36:09.070931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.837 [2024-04-26 15:36:09.070946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.837 [2024-04-26 15:36:09.070956] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.837 [2024-04-26 15:36:09.071193] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.837 [2024-04-26 15:36:09.071415] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.837 [2024-04-26 15:36:09.071424] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.837 [2024-04-26 15:36:09.071431] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.837 [2024-04-26 15:36:09.074974] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.837 [2024-04-26 15:36:09.083693] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.837 [2024-04-26 15:36:09.084250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.837 [2024-04-26 15:36:09.084637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.837 [2024-04-26 15:36:09.084650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.837 [2024-04-26 15:36:09.084659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.837 [2024-04-26 15:36:09.084907] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.837 [2024-04-26 15:36:09.085129] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.837 [2024-04-26 15:36:09.085137] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.837 [2024-04-26 15:36:09.085144] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.837 [2024-04-26 15:36:09.088667] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.837 [2024-04-26 15:36:09.097640] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.837 [2024-04-26 15:36:09.098321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.837 [2024-04-26 15:36:09.098683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.837 [2024-04-26 15:36:09.098697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.837 [2024-04-26 15:36:09.098706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.837 [2024-04-26 15:36:09.098951] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.837 [2024-04-26 15:36:09.099172] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.837 [2024-04-26 15:36:09.099180] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.837 [2024-04-26 15:36:09.099188] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.837 [2024-04-26 15:36:09.102714] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.837 [2024-04-26 15:36:09.111453] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.837 [2024-04-26 15:36:09.112114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.837 [2024-04-26 15:36:09.112488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.837 [2024-04-26 15:36:09.112501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.837 [2024-04-26 15:36:09.112511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.837 [2024-04-26 15:36:09.112747] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.837 [2024-04-26 15:36:09.112974] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.837 [2024-04-26 15:36:09.112983] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.837 [2024-04-26 15:36:09.112990] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.837 [2024-04-26 15:36:09.116515] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.837 [2024-04-26 15:36:09.125255] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.837 [2024-04-26 15:36:09.125795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.837 [2024-04-26 15:36:09.126137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.837 [2024-04-26 15:36:09.126148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.837 [2024-04-26 15:36:09.126155] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.837 [2024-04-26 15:36:09.126373] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.837 [2024-04-26 15:36:09.126596] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.837 [2024-04-26 15:36:09.126604] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.837 [2024-04-26 15:36:09.126611] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.837 [2024-04-26 15:36:09.130142] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.837 [2024-04-26 15:36:09.139090] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.837 [2024-04-26 15:36:09.139649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.837 [2024-04-26 15:36:09.140089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.837 [2024-04-26 15:36:09.140126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.837 [2024-04-26 15:36:09.140136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.837 [2024-04-26 15:36:09.140373] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.837 [2024-04-26 15:36:09.140595] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.837 [2024-04-26 15:36:09.140604] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.837 [2024-04-26 15:36:09.140612] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.837 [2024-04-26 15:36:09.144153] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.837 [2024-04-26 15:36:09.152892] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.837 [2024-04-26 15:36:09.153430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.837 [2024-04-26 15:36:09.153781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.837 [2024-04-26 15:36:09.153790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.837 [2024-04-26 15:36:09.153798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.837 [2024-04-26 15:36:09.154022] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.837 [2024-04-26 15:36:09.154240] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.837 [2024-04-26 15:36:09.154247] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.837 [2024-04-26 15:36:09.154254] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.837 [2024-04-26 15:36:09.157786] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.837 [2024-04-26 15:36:09.166771] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.837 [2024-04-26 15:36:09.167296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.837 [2024-04-26 15:36:09.167653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.837 [2024-04-26 15:36:09.167662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.837 [2024-04-26 15:36:09.167670] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.837 [2024-04-26 15:36:09.167894] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.837 [2024-04-26 15:36:09.168112] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.837 [2024-04-26 15:36:09.168124] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.838 [2024-04-26 15:36:09.168131] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.838 [2024-04-26 15:36:09.171654] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.838 [2024-04-26 15:36:09.180623] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.838 [2024-04-26 15:36:09.181141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.838 [2024-04-26 15:36:09.181481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.838 [2024-04-26 15:36:09.181491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.838 [2024-04-26 15:36:09.181498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.838 [2024-04-26 15:36:09.181716] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.838 [2024-04-26 15:36:09.181939] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.838 [2024-04-26 15:36:09.181947] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.838 [2024-04-26 15:36:09.181954] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.838 [2024-04-26 15:36:09.185479] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.838 [2024-04-26 15:36:09.194421] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.838 [2024-04-26 15:36:09.194928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.838 [2024-04-26 15:36:09.195239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.838 [2024-04-26 15:36:09.195249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.838 [2024-04-26 15:36:09.195256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.838 [2024-04-26 15:36:09.195475] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.838 [2024-04-26 15:36:09.195692] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.838 [2024-04-26 15:36:09.195699] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.838 [2024-04-26 15:36:09.195707] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.838 [2024-04-26 15:36:09.199237] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.838 [2024-04-26 15:36:09.208396] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.838 [2024-04-26 15:36:09.209097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.838 [2024-04-26 15:36:09.209449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.838 [2024-04-26 15:36:09.209461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.838 [2024-04-26 15:36:09.209470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.838 [2024-04-26 15:36:09.209707] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.838 [2024-04-26 15:36:09.209934] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.838 [2024-04-26 15:36:09.209943] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.838 [2024-04-26 15:36:09.209955] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.838 [2024-04-26 15:36:09.213478] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.838 [2024-04-26 15:36:09.222211] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.838 [2024-04-26 15:36:09.222633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.838 [2024-04-26 15:36:09.223037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.838 [2024-04-26 15:36:09.223048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.838 [2024-04-26 15:36:09.223056] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.838 [2024-04-26 15:36:09.223275] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.838 [2024-04-26 15:36:09.223492] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.838 [2024-04-26 15:36:09.223500] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.838 [2024-04-26 15:36:09.223507] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.838 [2024-04-26 15:36:09.227034] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.838 [2024-04-26 15:36:09.236185] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.838 [2024-04-26 15:36:09.236686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.838 [2024-04-26 15:36:09.237093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.838 [2024-04-26 15:36:09.237107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.838 [2024-04-26 15:36:09.237116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.838 [2024-04-26 15:36:09.237353] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.838 [2024-04-26 15:36:09.237575] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.838 [2024-04-26 15:36:09.237583] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.838 [2024-04-26 15:36:09.237590] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.838 [2024-04-26 15:36:09.241121] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.838 [2024-04-26 15:36:09.250054] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.838 [2024-04-26 15:36:09.250614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.838 [2024-04-26 15:36:09.250883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.838 [2024-04-26 15:36:09.250893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.838 [2024-04-26 15:36:09.250901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.838 [2024-04-26 15:36:09.251119] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.838 [2024-04-26 15:36:09.251336] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.838 [2024-04-26 15:36:09.251343] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.838 [2024-04-26 15:36:09.251350] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.838 [2024-04-26 15:36:09.254885] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.838 [2024-04-26 15:36:09.263824] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.838 [2024-04-26 15:36:09.264263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.838 [2024-04-26 15:36:09.264613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.838 [2024-04-26 15:36:09.264623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.838 [2024-04-26 15:36:09.264630] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.838 [2024-04-26 15:36:09.264853] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.838 [2024-04-26 15:36:09.265071] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.838 [2024-04-26 15:36:09.265079] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.838 [2024-04-26 15:36:09.265086] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.838 [2024-04-26 15:36:09.268604] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.838 [2024-04-26 15:36:09.277747] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.838 [2024-04-26 15:36:09.278413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.838 [2024-04-26 15:36:09.278672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.838 [2024-04-26 15:36:09.278685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:51.838 [2024-04-26 15:36:09.278694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:51.838 [2024-04-26 15:36:09.278939] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:51.838 [2024-04-26 15:36:09.279161] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.838 [2024-04-26 15:36:09.279169] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.838 [2024-04-26 15:36:09.279177] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.838 [2024-04-26 15:36:09.282702] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.101 [2024-04-26 15:36:09.291744] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.101 [2024-04-26 15:36:09.292441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.101 [2024-04-26 15:36:09.292712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.101 [2024-04-26 15:36:09.292725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.101 [2024-04-26 15:36:09.292735] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.101 [2024-04-26 15:36:09.292978] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.101 [2024-04-26 15:36:09.293200] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.101 [2024-04-26 15:36:09.293208] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.101 [2024-04-26 15:36:09.293216] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.101 [2024-04-26 15:36:09.296740] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.101 [2024-04-26 15:36:09.305679] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.101 [2024-04-26 15:36:09.306323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.101 [2024-04-26 15:36:09.306684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.101 [2024-04-26 15:36:09.306697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.101 [2024-04-26 15:36:09.306706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.101 [2024-04-26 15:36:09.306949] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.101 [2024-04-26 15:36:09.307170] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.101 [2024-04-26 15:36:09.307178] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.102 [2024-04-26 15:36:09.307186] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.102 [2024-04-26 15:36:09.310709] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.102 [2024-04-26 15:36:09.319643] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:52.102 [2024-04-26 15:36:09.320186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.102 [2024-04-26 15:36:09.320531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.102 [2024-04-26 15:36:09.320541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:52.102 [2024-04-26 15:36:09.320548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:52.102 [2024-04-26 15:36:09.320766] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:52.102 [2024-04-26 15:36:09.320988] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:52.102 [2024-04-26 15:36:09.320997] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:52.102 [2024-04-26 15:36:09.321004] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:52.102 [2024-04-26 15:36:09.324526] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:52.102 [2024-04-26 15:36:09.333451] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:52.102 [2024-04-26 15:36:09.334079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.102 [2024-04-26 15:36:09.334460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.102 [2024-04-26 15:36:09.334473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:52.102 [2024-04-26 15:36:09.334482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:52.102 [2024-04-26 15:36:09.334719] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:52.102 [2024-04-26 15:36:09.334945] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:52.102 [2024-04-26 15:36:09.334953] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:52.102 [2024-04-26 15:36:09.334961] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:52.102 [2024-04-26 15:36:09.338487] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:52.102 [2024-04-26 15:36:09.347220] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:52.102 [2024-04-26 15:36:09.347693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.102 [2024-04-26 15:36:09.348048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.102 [2024-04-26 15:36:09.348058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:52.102 [2024-04-26 15:36:09.348066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:52.102 [2024-04-26 15:36:09.348283] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:52.102 [2024-04-26 15:36:09.348501] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:52.102 [2024-04-26 15:36:09.348509] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:52.102 [2024-04-26 15:36:09.348516] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:52.102 [2024-04-26 15:36:09.352045] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:52.102 [2024-04-26 15:36:09.361188] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:52.102 [2024-04-26 15:36:09.361889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.102 [2024-04-26 15:36:09.362242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.102 [2024-04-26 15:36:09.362255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:52.102 [2024-04-26 15:36:09.362265] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:52.102 [2024-04-26 15:36:09.362501] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:52.102 [2024-04-26 15:36:09.362722] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:52.102 [2024-04-26 15:36:09.362730] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:52.102 [2024-04-26 15:36:09.362737] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:52.102 [2024-04-26 15:36:09.366271] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:52.102 [2024-04-26 15:36:09.375011] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:52.102 [2024-04-26 15:36:09.375662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.102 [2024-04-26 15:36:09.375968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.102 [2024-04-26 15:36:09.375983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:52.102 [2024-04-26 15:36:09.375993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:52.102 [2024-04-26 15:36:09.376230] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:52.102 [2024-04-26 15:36:09.376451] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:52.102 [2024-04-26 15:36:09.376459] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:52.102 [2024-04-26 15:36:09.376467] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:52.102 [2024-04-26 15:36:09.379996] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:52.102 [2024-04-26 15:36:09.388929] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:52.102 [2024-04-26 15:36:09.389502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.102 [2024-04-26 15:36:09.389736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.102 [2024-04-26 15:36:09.389750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:52.102 [2024-04-26 15:36:09.389758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:52.102 [2024-04-26 15:36:09.389983] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:52.102 [2024-04-26 15:36:09.390203] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:52.102 [2024-04-26 15:36:09.390211] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:52.102 [2024-04-26 15:36:09.390218] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:52.102 [2024-04-26 15:36:09.393744] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:52.102 [2024-04-26 15:36:09.402886] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:52.102 [2024-04-26 15:36:09.403570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.102 [2024-04-26 15:36:09.403976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.102 [2024-04-26 15:36:09.403991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:52.102 [2024-04-26 15:36:09.404000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:52.102 [2024-04-26 15:36:09.404237] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:52.102 [2024-04-26 15:36:09.404458] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:52.102 [2024-04-26 15:36:09.404466] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:52.102 [2024-04-26 15:36:09.404473] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:52.102 [2024-04-26 15:36:09.408004] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:52.102 [2024-04-26 15:36:09.416726] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:52.102 [2024-04-26 15:36:09.417291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.102 [2024-04-26 15:36:09.417644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.102 [2024-04-26 15:36:09.417653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:52.102 [2024-04-26 15:36:09.417661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:52.102 [2024-04-26 15:36:09.417883] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:52.102 [2024-04-26 15:36:09.418101] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:52.102 [2024-04-26 15:36:09.418109] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:52.102 [2024-04-26 15:36:09.418116] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:52.102 [2024-04-26 15:36:09.421634] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:52.102 [2024-04-26 15:36:09.430563] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:52.102 [2024-04-26 15:36:09.431110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.102 [2024-04-26 15:36:09.431328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.102 [2024-04-26 15:36:09.431337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:52.102 [2024-04-26 15:36:09.431349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:52.103 [2024-04-26 15:36:09.431567] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:52.103 [2024-04-26 15:36:09.431784] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:52.103 [2024-04-26 15:36:09.431792] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:52.103 [2024-04-26 15:36:09.431800] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:52.103 [2024-04-26 15:36:09.435329] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:52.103 [2024-04-26 15:36:09.444479] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:52.103 [2024-04-26 15:36:09.444902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.103 [2024-04-26 15:36:09.445220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.103 [2024-04-26 15:36:09.445230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:52.103 [2024-04-26 15:36:09.445239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:52.103 [2024-04-26 15:36:09.445456] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:52.103 [2024-04-26 15:36:09.445673] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:52.103 [2024-04-26 15:36:09.445681] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:52.103 [2024-04-26 15:36:09.445688] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:52.103 [2024-04-26 15:36:09.449215] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:52.103 [2024-04-26 15:36:09.458364] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:52.103 [2024-04-26 15:36:09.458894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.103 [2024-04-26 15:36:09.459227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.103 [2024-04-26 15:36:09.459237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:52.103 [2024-04-26 15:36:09.459244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:52.103 [2024-04-26 15:36:09.459461] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:52.103 [2024-04-26 15:36:09.459678] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:52.103 [2024-04-26 15:36:09.459686] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:52.103 [2024-04-26 15:36:09.459692] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:52.103 [2024-04-26 15:36:09.463213] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:52.103 [2024-04-26 15:36:09.472154] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:52.103 [2024-04-26 15:36:09.472701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.103 [2024-04-26 15:36:09.472947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.103 [2024-04-26 15:36:09.472957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:52.103 [2024-04-26 15:36:09.472965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:52.103 [2024-04-26 15:36:09.473186] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:52.103 [2024-04-26 15:36:09.473404] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:52.103 [2024-04-26 15:36:09.473411] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:52.103 [2024-04-26 15:36:09.473418] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:52.103 [2024-04-26 15:36:09.476939] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:52.103 [2024-04-26 15:36:09.486077] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:52.103 [2024-04-26 15:36:09.486646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.103 [2024-04-26 15:36:09.486894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.103 [2024-04-26 15:36:09.486903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:52.103 [2024-04-26 15:36:09.486910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:52.103 [2024-04-26 15:36:09.487128] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:52.103 [2024-04-26 15:36:09.487345] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:52.103 [2024-04-26 15:36:09.487353] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:52.103 [2024-04-26 15:36:09.487359] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:52.103 [2024-04-26 15:36:09.490880] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:52.103 [2024-04-26 15:36:09.500015] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:52.103 [2024-04-26 15:36:09.500543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.103 [2024-04-26 15:36:09.500771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.103 [2024-04-26 15:36:09.500781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:52.103 [2024-04-26 15:36:09.500788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:52.103 [2024-04-26 15:36:09.501010] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:52.103 [2024-04-26 15:36:09.501228] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:52.103 [2024-04-26 15:36:09.501235] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:52.103 [2024-04-26 15:36:09.501242] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:52.103 [2024-04-26 15:36:09.504766] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:52.103 [2024-04-26 15:36:09.513906] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:52.103 [2024-04-26 15:36:09.514435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.103 [2024-04-26 15:36:09.514790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.103 [2024-04-26 15:36:09.514799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:52.103 [2024-04-26 15:36:09.514807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:52.103 [2024-04-26 15:36:09.515029] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:52.103 [2024-04-26 15:36:09.515252] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:52.103 [2024-04-26 15:36:09.515259] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:52.103 [2024-04-26 15:36:09.515266] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:52.103 [2024-04-26 15:36:09.518783] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:52.103 [2024-04-26 15:36:09.527711] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:52.103 [2024-04-26 15:36:09.528386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.103 [2024-04-26 15:36:09.528611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.103 [2024-04-26 15:36:09.528626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:52.103 [2024-04-26 15:36:09.528635] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:52.103 [2024-04-26 15:36:09.528882] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:52.103 [2024-04-26 15:36:09.529104] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:52.103 [2024-04-26 15:36:09.529112] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:52.103 [2024-04-26 15:36:09.529120] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:52.103 [2024-04-26 15:36:09.532645] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:52.103 [2024-04-26 15:36:09.541578] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:52.103 [2024-04-26 15:36:09.542257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.103 [2024-04-26 15:36:09.542678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.103 [2024-04-26 15:36:09.542690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:52.103 [2024-04-26 15:36:09.542700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:52.103 [2024-04-26 15:36:09.542944] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:52.103 [2024-04-26 15:36:09.543166] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:52.103 [2024-04-26 15:36:09.543174] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:52.103 [2024-04-26 15:36:09.543181] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:52.103 [2024-04-26 15:36:09.546703] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:52.365 [2024-04-26 15:36:09.555434] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:52.366 [2024-04-26 15:36:09.555980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.366 [2024-04-26 15:36:09.556282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.366 [2024-04-26 15:36:09.556292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:52.366 [2024-04-26 15:36:09.556300] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:52.366 [2024-04-26 15:36:09.556518] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:52.366 [2024-04-26 15:36:09.556736] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:52.366 [2024-04-26 15:36:09.556748] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:52.366 [2024-04-26 15:36:09.556755] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:52.366 [2024-04-26 15:36:09.560282] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:52.366 [2024-04-26 15:36:09.569210] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:52.366 [2024-04-26 15:36:09.569742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.366 [2024-04-26 15:36:09.570051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.366 [2024-04-26 15:36:09.570061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:52.366 [2024-04-26 15:36:09.570068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:52.366 [2024-04-26 15:36:09.570285] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:52.366 [2024-04-26 15:36:09.570502] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:52.366 [2024-04-26 15:36:09.570510] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:52.366 [2024-04-26 15:36:09.570516] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:52.366 [2024-04-26 15:36:09.574051] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:52.366 [2024-04-26 15:36:09.582990] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:52.366 [2024-04-26 15:36:09.583566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.366 [2024-04-26 15:36:09.583913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.366 [2024-04-26 15:36:09.583923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:52.366 [2024-04-26 15:36:09.583930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:52.366 [2024-04-26 15:36:09.584148] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:52.366 [2024-04-26 15:36:09.584365] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:52.366 [2024-04-26 15:36:09.584372] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:52.366 [2024-04-26 15:36:09.584379] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:52.366 [2024-04-26 15:36:09.587903] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:52.366 [2024-04-26 15:36:09.596843] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:52.366 [2024-04-26 15:36:09.597484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.366 [2024-04-26 15:36:09.597856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.366 [2024-04-26 15:36:09.597870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:52.366 [2024-04-26 15:36:09.597879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:52.366 [2024-04-26 15:36:09.598116] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:52.366 [2024-04-26 15:36:09.598337] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:52.366 [2024-04-26 15:36:09.598345] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:52.366 [2024-04-26 15:36:09.598356] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:52.366 [2024-04-26 15:36:09.601887] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:52.366 [2024-04-26 15:36:09.610626] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:52.366 [2024-04-26 15:36:09.611304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.366 [2024-04-26 15:36:09.611665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.366 [2024-04-26 15:36:09.611677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:52.366 [2024-04-26 15:36:09.611686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:52.366 [2024-04-26 15:36:09.611928] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:52.366 [2024-04-26 15:36:09.612149] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:52.366 [2024-04-26 15:36:09.612157] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:52.366 [2024-04-26 15:36:09.612164] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:52.366 [2024-04-26 15:36:09.615691] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:52.366 [2024-04-26 15:36:09.624417] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:52.366 [2024-04-26 15:36:09.625068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.366 [2024-04-26 15:36:09.625436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.366 [2024-04-26 15:36:09.625450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:52.366 [2024-04-26 15:36:09.625459] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:52.366 [2024-04-26 15:36:09.625695] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:52.366 [2024-04-26 15:36:09.625923] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:52.366 [2024-04-26 15:36:09.625931] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:52.366 [2024-04-26 15:36:09.625939] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:52.366 [2024-04-26 15:36:09.629467] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:52.366 [2024-04-26 15:36:09.638196] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.366 [2024-04-26 15:36:09.638880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.366 [2024-04-26 15:36:09.639255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.366 [2024-04-26 15:36:09.639267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.366 [2024-04-26 15:36:09.639276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.366 [2024-04-26 15:36:09.639513] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.366 [2024-04-26 15:36:09.639733] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.366 [2024-04-26 15:36:09.639741] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.366 [2024-04-26 15:36:09.639749] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.366 [2024-04-26 15:36:09.643283] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.366 [2024-04-26 15:36:09.652017] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.366 [2024-04-26 15:36:09.652597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.366 [2024-04-26 15:36:09.653066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.366 [2024-04-26 15:36:09.653103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.366 [2024-04-26 15:36:09.653114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.366 [2024-04-26 15:36:09.653350] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.366 [2024-04-26 15:36:09.653572] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.366 [2024-04-26 15:36:09.653580] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.366 [2024-04-26 15:36:09.653588] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.366 [2024-04-26 15:36:09.657120] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.366 [2024-04-26 15:36:09.665847] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.366 [2024-04-26 15:36:09.666531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.366 [2024-04-26 15:36:09.666928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.366 [2024-04-26 15:36:09.666943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.366 [2024-04-26 15:36:09.666953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.366 [2024-04-26 15:36:09.667190] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.366 [2024-04-26 15:36:09.667412] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.366 [2024-04-26 15:36:09.667420] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.366 [2024-04-26 15:36:09.667428] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.366 [2024-04-26 15:36:09.670958] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.366 [2024-04-26 15:36:09.679691] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.366 [2024-04-26 15:36:09.680269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.367 [2024-04-26 15:36:09.680640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.367 [2024-04-26 15:36:09.680650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.367 [2024-04-26 15:36:09.680658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.367 [2024-04-26 15:36:09.680882] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.367 [2024-04-26 15:36:09.681101] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.367 [2024-04-26 15:36:09.681109] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.367 [2024-04-26 15:36:09.681116] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.367 [2024-04-26 15:36:09.684635] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.367 [2024-04-26 15:36:09.693573] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.367 [2024-04-26 15:36:09.694273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.367 [2024-04-26 15:36:09.694645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.367 [2024-04-26 15:36:09.694657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.367 [2024-04-26 15:36:09.694667] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.367 [2024-04-26 15:36:09.694911] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.367 [2024-04-26 15:36:09.695133] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.367 [2024-04-26 15:36:09.695141] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.367 [2024-04-26 15:36:09.695148] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.367 [2024-04-26 15:36:09.698673] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.367 [2024-04-26 15:36:09.707401] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.367 [2024-04-26 15:36:09.708083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.367 [2024-04-26 15:36:09.708457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.367 [2024-04-26 15:36:09.708470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.367 [2024-04-26 15:36:09.708479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.367 [2024-04-26 15:36:09.708716] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.367 [2024-04-26 15:36:09.708942] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.367 [2024-04-26 15:36:09.708951] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.367 [2024-04-26 15:36:09.708958] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.367 [2024-04-26 15:36:09.712491] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.367 [2024-04-26 15:36:09.721219] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.367 [2024-04-26 15:36:09.721683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.367 [2024-04-26 15:36:09.722112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.367 [2024-04-26 15:36:09.722148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.367 [2024-04-26 15:36:09.722159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.367 [2024-04-26 15:36:09.722396] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.367 [2024-04-26 15:36:09.722617] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.367 [2024-04-26 15:36:09.722626] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.367 [2024-04-26 15:36:09.722633] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.367 [2024-04-26 15:36:09.726166] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.367 [2024-04-26 15:36:09.735098] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.367 [2024-04-26 15:36:09.735577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.367 [2024-04-26 15:36:09.735937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.367 [2024-04-26 15:36:09.735949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.367 [2024-04-26 15:36:09.735957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.367 [2024-04-26 15:36:09.736175] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.367 [2024-04-26 15:36:09.736393] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.367 [2024-04-26 15:36:09.736400] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.367 [2024-04-26 15:36:09.736407] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.367 [2024-04-26 15:36:09.739933] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.367 [2024-04-26 15:36:09.749078] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.367 [2024-04-26 15:36:09.749717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.367 [2024-04-26 15:36:09.750111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.367 [2024-04-26 15:36:09.750126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.367 [2024-04-26 15:36:09.750135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.367 [2024-04-26 15:36:09.750372] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.367 [2024-04-26 15:36:09.750593] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.367 [2024-04-26 15:36:09.750602] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.367 [2024-04-26 15:36:09.750609] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.367 [2024-04-26 15:36:09.754140] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.367 [2024-04-26 15:36:09.762868] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.367 [2024-04-26 15:36:09.763326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.367 [2024-04-26 15:36:09.763549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.367 [2024-04-26 15:36:09.763558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.367 [2024-04-26 15:36:09.763566] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.367 [2024-04-26 15:36:09.763784] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.367 [2024-04-26 15:36:09.764007] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.367 [2024-04-26 15:36:09.764015] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.367 [2024-04-26 15:36:09.764021] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.367 [2024-04-26 15:36:09.767540] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.367 [2024-04-26 15:36:09.776691] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.367 [2024-04-26 15:36:09.777237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.367 [2024-04-26 15:36:09.777582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.367 [2024-04-26 15:36:09.777596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.367 [2024-04-26 15:36:09.777603] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.367 [2024-04-26 15:36:09.777821] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.367 [2024-04-26 15:36:09.778044] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.367 [2024-04-26 15:36:09.778051] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.367 [2024-04-26 15:36:09.778058] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.367 [2024-04-26 15:36:09.781576] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.367 [2024-04-26 15:36:09.790511] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.367 [2024-04-26 15:36:09.791214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.367 [2024-04-26 15:36:09.791575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.367 [2024-04-26 15:36:09.791588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.367 [2024-04-26 15:36:09.791597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.367 [2024-04-26 15:36:09.791833] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.367 [2024-04-26 15:36:09.792061] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.367 [2024-04-26 15:36:09.792069] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.367 [2024-04-26 15:36:09.792076] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.367 [2024-04-26 15:36:09.795603] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.367 [2024-04-26 15:36:09.804330] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.367 [2024-04-26 15:36:09.804885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.367 [2024-04-26 15:36:09.805227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.367 [2024-04-26 15:36:09.805237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.368 [2024-04-26 15:36:09.805245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.368 [2024-04-26 15:36:09.805467] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.368 [2024-04-26 15:36:09.805685] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.368 [2024-04-26 15:36:09.805693] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.368 [2024-04-26 15:36:09.805700] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.368 [2024-04-26 15:36:09.809226] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.629 [2024-04-26 15:36:09.818156] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.629 [2024-04-26 15:36:09.818802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.629 [2024-04-26 15:36:09.819181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.629 [2024-04-26 15:36:09.819194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.629 [2024-04-26 15:36:09.819208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.629 [2024-04-26 15:36:09.819445] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.629 [2024-04-26 15:36:09.819665] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.629 [2024-04-26 15:36:09.819674] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.629 [2024-04-26 15:36:09.819681] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.629 [2024-04-26 15:36:09.823208] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.629 [2024-04-26 15:36:09.831933] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.629 [2024-04-26 15:36:09.832409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.629 [2024-04-26 15:36:09.832701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.629 [2024-04-26 15:36:09.832711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.629 [2024-04-26 15:36:09.832718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.629 [2024-04-26 15:36:09.832941] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.629 [2024-04-26 15:36:09.833158] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.629 [2024-04-26 15:36:09.833166] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.629 [2024-04-26 15:36:09.833173] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.629 [2024-04-26 15:36:09.836690] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.629 [2024-04-26 15:36:09.845821] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.629 [2024-04-26 15:36:09.846343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.629 [2024-04-26 15:36:09.846691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.629 [2024-04-26 15:36:09.846700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.629 [2024-04-26 15:36:09.846707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.629 [2024-04-26 15:36:09.846930] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.629 [2024-04-26 15:36:09.847147] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.629 [2024-04-26 15:36:09.847154] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.629 [2024-04-26 15:36:09.847161] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.629 [2024-04-26 15:36:09.850682] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.629 [2024-04-26 15:36:09.859620] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.629 [2024-04-26 15:36:09.860215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.629 [2024-04-26 15:36:09.860587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.629 [2024-04-26 15:36:09.860600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.629 [2024-04-26 15:36:09.860609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.629 [2024-04-26 15:36:09.860857] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.629 [2024-04-26 15:36:09.861078] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.629 [2024-04-26 15:36:09.861086] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.629 [2024-04-26 15:36:09.861093] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.629 [2024-04-26 15:36:09.864698] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.629 [2024-04-26 15:36:09.873445] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.629 [2024-04-26 15:36:09.873976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.629 [2024-04-26 15:36:09.874342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.629 [2024-04-26 15:36:09.874355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.629 [2024-04-26 15:36:09.874364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.629 [2024-04-26 15:36:09.874601] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.629 [2024-04-26 15:36:09.874822] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.629 [2024-04-26 15:36:09.874830] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.629 [2024-04-26 15:36:09.874844] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.629 [2024-04-26 15:36:09.878589] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.629 [2024-04-26 15:36:09.887323] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.629 [2024-04-26 15:36:09.887943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.629 [2024-04-26 15:36:09.888182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.629 [2024-04-26 15:36:09.888195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.629 [2024-04-26 15:36:09.888205] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.629 [2024-04-26 15:36:09.888441] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.629 [2024-04-26 15:36:09.888662] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.630 [2024-04-26 15:36:09.888670] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.630 [2024-04-26 15:36:09.888677] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.630 [2024-04-26 15:36:09.892211] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.630 [2024-04-26 15:36:09.901146] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.630 [2024-04-26 15:36:09.901816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.630 [2024-04-26 15:36:09.902216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.630 [2024-04-26 15:36:09.902229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.630 [2024-04-26 15:36:09.902239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.630 [2024-04-26 15:36:09.902476] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.630 [2024-04-26 15:36:09.902701] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.630 [2024-04-26 15:36:09.902709] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.630 [2024-04-26 15:36:09.902716] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.630 [2024-04-26 15:36:09.906247] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.630 [2024-04-26 15:36:09.914969] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.630 [2024-04-26 15:36:09.915640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.630 [2024-04-26 15:36:09.915900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.630 [2024-04-26 15:36:09.915914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.630 [2024-04-26 15:36:09.915923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.630 [2024-04-26 15:36:09.916161] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.630 [2024-04-26 15:36:09.916382] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.630 [2024-04-26 15:36:09.916389] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.630 [2024-04-26 15:36:09.916397] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.630 [2024-04-26 15:36:09.919926] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.630 [2024-04-26 15:36:09.928886] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.630 [2024-04-26 15:36:09.929429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.630 [2024-04-26 15:36:09.929778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.630 [2024-04-26 15:36:09.929788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.630 [2024-04-26 15:36:09.929795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.630 [2024-04-26 15:36:09.930019] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.630 [2024-04-26 15:36:09.930237] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.630 [2024-04-26 15:36:09.930245] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.630 [2024-04-26 15:36:09.930251] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.630 [2024-04-26 15:36:09.933768] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.630 [2024-04-26 15:36:09.942692] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.630 [2024-04-26 15:36:09.943367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.630 [2024-04-26 15:36:09.943728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.630 [2024-04-26 15:36:09.943741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.630 [2024-04-26 15:36:09.943750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.630 [2024-04-26 15:36:09.943995] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.630 [2024-04-26 15:36:09.944216] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.630 [2024-04-26 15:36:09.944228] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.630 [2024-04-26 15:36:09.944235] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.630 [2024-04-26 15:36:09.947760] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.630 [2024-04-26 15:36:09.956484] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.630 [2024-04-26 15:36:09.957152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.630 [2024-04-26 15:36:09.957517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.630 [2024-04-26 15:36:09.957529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.630 [2024-04-26 15:36:09.957538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.630 [2024-04-26 15:36:09.957775] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.630 [2024-04-26 15:36:09.958003] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.630 [2024-04-26 15:36:09.958011] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.630 [2024-04-26 15:36:09.958019] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.630 [2024-04-26 15:36:09.961543] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.630 [2024-04-26 15:36:09.970263] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.630 [2024-04-26 15:36:09.970937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.630 [2024-04-26 15:36:09.971302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.630 [2024-04-26 15:36:09.971314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.630 [2024-04-26 15:36:09.971323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.630 [2024-04-26 15:36:09.971560] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.630 [2024-04-26 15:36:09.971781] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.630 [2024-04-26 15:36:09.971789] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.630 [2024-04-26 15:36:09.971796] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.630 [2024-04-26 15:36:09.975334] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.630 [2024-04-26 15:36:09.984062] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.630 [2024-04-26 15:36:09.984733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.630 [2024-04-26 15:36:09.985110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.630 [2024-04-26 15:36:09.985125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.630 [2024-04-26 15:36:09.985134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.630 [2024-04-26 15:36:09.985371] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.630 [2024-04-26 15:36:09.985592] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.630 [2024-04-26 15:36:09.985600] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.630 [2024-04-26 15:36:09.985611] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.630 [2024-04-26 15:36:09.989141] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.630 [2024-04-26 15:36:09.997868] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.630 [2024-04-26 15:36:09.998525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.630 [2024-04-26 15:36:09.998886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.630 [2024-04-26 15:36:09.998901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.631 [2024-04-26 15:36:09.998910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.631 [2024-04-26 15:36:09.999146] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.631 [2024-04-26 15:36:09.999367] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.631 [2024-04-26 15:36:09.999375] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.631 [2024-04-26 15:36:09.999383] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.631 [2024-04-26 15:36:10.003377] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.631 [2024-04-26 15:36:10.011708] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.631 [2024-04-26 15:36:10.012364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.631 [2024-04-26 15:36:10.012724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.631 [2024-04-26 15:36:10.012737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.631 [2024-04-26 15:36:10.012747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.631 [2024-04-26 15:36:10.012990] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.631 [2024-04-26 15:36:10.013212] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.631 [2024-04-26 15:36:10.013220] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.631 [2024-04-26 15:36:10.013228] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.631 [2024-04-26 15:36:10.016754] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.631 [2024-04-26 15:36:10.025480] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.631 [2024-04-26 15:36:10.026156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.631 [2024-04-26 15:36:10.026407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.631 [2024-04-26 15:36:10.026419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.631 [2024-04-26 15:36:10.026429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.631 [2024-04-26 15:36:10.026666] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.631 [2024-04-26 15:36:10.026893] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.631 [2024-04-26 15:36:10.026903] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.631 [2024-04-26 15:36:10.026910] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.631 [2024-04-26 15:36:10.030438] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.631 [2024-04-26 15:36:10.039367] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.631 [2024-04-26 15:36:10.039946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.631 [2024-04-26 15:36:10.040134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.631 [2024-04-26 15:36:10.040143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.631 [2024-04-26 15:36:10.040151] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.631 [2024-04-26 15:36:10.040369] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.631 [2024-04-26 15:36:10.040587] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.631 [2024-04-26 15:36:10.040595] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.631 [2024-04-26 15:36:10.040601] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.631 [2024-04-26 15:36:10.044134] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.631 [2024-04-26 15:36:10.053286] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.631 [2024-04-26 15:36:10.053963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.631 [2024-04-26 15:36:10.054354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.631 [2024-04-26 15:36:10.054367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.631 [2024-04-26 15:36:10.054376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.631 [2024-04-26 15:36:10.054613] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.631 [2024-04-26 15:36:10.054834] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.631 [2024-04-26 15:36:10.054850] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.631 [2024-04-26 15:36:10.054857] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.631 [2024-04-26 15:36:10.058382] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.631 [2024-04-26 15:36:10.067105] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.631 [2024-04-26 15:36:10.067773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.631 [2024-04-26 15:36:10.068141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.631 [2024-04-26 15:36:10.068156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.631 [2024-04-26 15:36:10.068165] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.631 [2024-04-26 15:36:10.068402] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.631 [2024-04-26 15:36:10.068623] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.631 [2024-04-26 15:36:10.068631] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.631 [2024-04-26 15:36:10.068639] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.631 [2024-04-26 15:36:10.072177] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.893 [2024-04-26 15:36:10.080912] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.893 [2024-04-26 15:36:10.081497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.893 [2024-04-26 15:36:10.081693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.893 [2024-04-26 15:36:10.081703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.893 [2024-04-26 15:36:10.081711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.893 [2024-04-26 15:36:10.081934] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.893 [2024-04-26 15:36:10.082151] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.893 [2024-04-26 15:36:10.082159] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.893 [2024-04-26 15:36:10.082166] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.893 [2024-04-26 15:36:10.085682] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.893 [2024-04-26 15:36:10.094818] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.893 [2024-04-26 15:36:10.095485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.893 [2024-04-26 15:36:10.095861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.893 [2024-04-26 15:36:10.095875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.893 [2024-04-26 15:36:10.095885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.893 [2024-04-26 15:36:10.096121] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.893 [2024-04-26 15:36:10.096342] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.893 [2024-04-26 15:36:10.096350] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.893 [2024-04-26 15:36:10.096357] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.893 [2024-04-26 15:36:10.099886] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.893 [2024-04-26 15:36:10.108607] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.893 [2024-04-26 15:36:10.109289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.893 [2024-04-26 15:36:10.109724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.893 [2024-04-26 15:36:10.109737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.893 [2024-04-26 15:36:10.109746] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.893 [2024-04-26 15:36:10.109990] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.893 [2024-04-26 15:36:10.110211] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.893 [2024-04-26 15:36:10.110219] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.893 [2024-04-26 15:36:10.110226] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.893 [2024-04-26 15:36:10.113752] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.893 [2024-04-26 15:36:10.122483] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.893 [2024-04-26 15:36:10.123221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.893 [2024-04-26 15:36:10.123582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.893 [2024-04-26 15:36:10.123595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.893 [2024-04-26 15:36:10.123604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.893 [2024-04-26 15:36:10.123849] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.893 [2024-04-26 15:36:10.124070] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.893 [2024-04-26 15:36:10.124079] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.893 [2024-04-26 15:36:10.124086] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.893 [2024-04-26 15:36:10.127610] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.893 [2024-04-26 15:36:10.136334] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.893 [2024-04-26 15:36:10.136937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.893 [2024-04-26 15:36:10.137301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.893 [2024-04-26 15:36:10.137313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.893 [2024-04-26 15:36:10.137323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.893 [2024-04-26 15:36:10.137559] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.893 [2024-04-26 15:36:10.137780] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.893 [2024-04-26 15:36:10.137788] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.893 [2024-04-26 15:36:10.137795] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.894 [2024-04-26 15:36:10.141327] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.894 [2024-04-26 15:36:10.150259] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.894 [2024-04-26 15:36:10.150794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.894 [2024-04-26 15:36:10.151174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.894 [2024-04-26 15:36:10.151185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.894 [2024-04-26 15:36:10.151192] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.894 [2024-04-26 15:36:10.151410] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.894 [2024-04-26 15:36:10.151627] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.894 [2024-04-26 15:36:10.151634] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.894 [2024-04-26 15:36:10.151641] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.894 [2024-04-26 15:36:10.155169] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.894 [2024-04-26 15:36:10.164106] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.894 [2024-04-26 15:36:10.164680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.894 [2024-04-26 15:36:10.164889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.894 [2024-04-26 15:36:10.164899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.894 [2024-04-26 15:36:10.164907] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.894 [2024-04-26 15:36:10.165124] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.894 [2024-04-26 15:36:10.165341] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.894 [2024-04-26 15:36:10.165348] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.894 [2024-04-26 15:36:10.165355] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.894 [2024-04-26 15:36:10.168875] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.894 [2024-04-26 15:36:10.178017] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.894 [2024-04-26 15:36:10.178600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.894 [2024-04-26 15:36:10.178957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.894 [2024-04-26 15:36:10.178967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.894 [2024-04-26 15:36:10.178974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.894 [2024-04-26 15:36:10.179191] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.894 [2024-04-26 15:36:10.179407] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.894 [2024-04-26 15:36:10.179415] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.894 [2024-04-26 15:36:10.179421] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.894 [2024-04-26 15:36:10.182942] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.894 [2024-04-26 15:36:10.191867] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.894 [2024-04-26 15:36:10.192529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.894 [2024-04-26 15:36:10.192888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.894 [2024-04-26 15:36:10.192901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.894 [2024-04-26 15:36:10.192911] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.894 [2024-04-26 15:36:10.193147] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.894 [2024-04-26 15:36:10.193368] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.894 [2024-04-26 15:36:10.193376] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.894 [2024-04-26 15:36:10.193384] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.894 [2024-04-26 15:36:10.196912] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.894 [2024-04-26 15:36:10.205637] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.894 [2024-04-26 15:36:10.206174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.894 [2024-04-26 15:36:10.206523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.894 [2024-04-26 15:36:10.206532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.894 [2024-04-26 15:36:10.206544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.894 [2024-04-26 15:36:10.206762] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.894 [2024-04-26 15:36:10.206984] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.894 [2024-04-26 15:36:10.206992] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.894 [2024-04-26 15:36:10.206999] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.894 [2024-04-26 15:36:10.210517] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.894 [2024-04-26 15:36:10.219440] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.894 [2024-04-26 15:36:10.219982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.894 [2024-04-26 15:36:10.220315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.894 [2024-04-26 15:36:10.220324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.894 [2024-04-26 15:36:10.220331] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.894 [2024-04-26 15:36:10.220549] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.894 [2024-04-26 15:36:10.220766] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.894 [2024-04-26 15:36:10.220774] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.894 [2024-04-26 15:36:10.220781] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.894 [2024-04-26 15:36:10.224300] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.894 [2024-04-26 15:36:10.233228] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.894 [2024-04-26 15:36:10.233754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.894 [2024-04-26 15:36:10.234093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.894 [2024-04-26 15:36:10.234103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.894 [2024-04-26 15:36:10.234111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.894 [2024-04-26 15:36:10.234328] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.894 [2024-04-26 15:36:10.234545] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.894 [2024-04-26 15:36:10.234552] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.894 [2024-04-26 15:36:10.234559] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.894 [2024-04-26 15:36:10.238080] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.894 [2024-04-26 15:36:10.247005] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.894 [2024-04-26 15:36:10.247530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.894 [2024-04-26 15:36:10.247884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.894 [2024-04-26 15:36:10.247895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:52.894 [2024-04-26 15:36:10.247902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:52.894 [2024-04-26 15:36:10.248123] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:52.894 [2024-04-26 15:36:10.248340] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.894 [2024-04-26 15:36:10.248347] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.894 [2024-04-26 15:36:10.248354] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.894 [2024-04-26 15:36:10.251874] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.421 [2024-04-26 15:36:10.606915] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.421 [2024-04-26 15:36:10.607580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-04-26 15:36:10.607941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-04-26 15:36:10.607955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.421 [2024-04-26 15:36:10.607964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.421 [2024-04-26 15:36:10.608200] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.421 [2024-04-26 15:36:10.608421] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.421 [2024-04-26 15:36:10.608429] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.421 [2024-04-26 15:36:10.608436] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.421 [2024-04-26 15:36:10.611964] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.421 [2024-04-26 15:36:10.620685] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.421 [2024-04-26 15:36:10.621229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-04-26 15:36:10.621582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-04-26 15:36:10.621591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.421 [2024-04-26 15:36:10.621599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.421 [2024-04-26 15:36:10.621816] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.421 [2024-04-26 15:36:10.622039] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.421 [2024-04-26 15:36:10.622048] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.421 [2024-04-26 15:36:10.622054] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.421 [2024-04-26 15:36:10.625571] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.421 [2024-04-26 15:36:10.634496] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.421 [2024-04-26 15:36:10.635176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-04-26 15:36:10.635536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-04-26 15:36:10.635549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.421 [2024-04-26 15:36:10.635558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.421 [2024-04-26 15:36:10.635799] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.421 [2024-04-26 15:36:10.636028] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.421 [2024-04-26 15:36:10.636037] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.421 [2024-04-26 15:36:10.636044] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.421 [2024-04-26 15:36:10.639573] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.421 [2024-04-26 15:36:10.648304] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.421 [2024-04-26 15:36:10.648952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-04-26 15:36:10.649320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-04-26 15:36:10.649333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.421 [2024-04-26 15:36:10.649342] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.421 [2024-04-26 15:36:10.649578] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.421 [2024-04-26 15:36:10.649799] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.421 [2024-04-26 15:36:10.649807] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.421 [2024-04-26 15:36:10.649814] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.421 [2024-04-26 15:36:10.653349] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.421 [2024-04-26 15:36:10.662081] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.421 [2024-04-26 15:36:10.662750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-04-26 15:36:10.663134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-04-26 15:36:10.663148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.421 [2024-04-26 15:36:10.663158] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.421 [2024-04-26 15:36:10.663395] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.421 [2024-04-26 15:36:10.663616] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.421 [2024-04-26 15:36:10.663624] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.421 [2024-04-26 15:36:10.663632] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.421 [2024-04-26 15:36:10.667166] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.421 [2024-04-26 15:36:10.675906] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.421 [2024-04-26 15:36:10.676485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-04-26 15:36:10.676845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-04-26 15:36:10.676856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.421 [2024-04-26 15:36:10.676863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.421 [2024-04-26 15:36:10.677081] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.421 [2024-04-26 15:36:10.677303] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.421 [2024-04-26 15:36:10.677311] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.421 [2024-04-26 15:36:10.677318] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.421 [2024-04-26 15:36:10.680842] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.421 [2024-04-26 15:36:10.689771] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.421 [2024-04-26 15:36:10.690440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-04-26 15:36:10.690798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-04-26 15:36:10.690811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.421 [2024-04-26 15:36:10.690820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.421 [2024-04-26 15:36:10.691067] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.421 [2024-04-26 15:36:10.691288] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.421 [2024-04-26 15:36:10.691297] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.421 [2024-04-26 15:36:10.691304] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.421 [2024-04-26 15:36:10.694829] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.421 [2024-04-26 15:36:10.703555] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.421 [2024-04-26 15:36:10.704207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-04-26 15:36:10.704639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.421 [2024-04-26 15:36:10.704652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.421 [2024-04-26 15:36:10.704661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.421 [2024-04-26 15:36:10.704907] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.421 [2024-04-26 15:36:10.705128] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.422 [2024-04-26 15:36:10.705136] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.422 [2024-04-26 15:36:10.705143] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.422 [2024-04-26 15:36:10.708669] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.422 [2024-04-26 15:36:10.717397] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.422 [2024-04-26 15:36:10.718073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-04-26 15:36:10.718431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-04-26 15:36:10.718444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.422 [2024-04-26 15:36:10.718453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.422 [2024-04-26 15:36:10.718689] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.422 [2024-04-26 15:36:10.718919] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.422 [2024-04-26 15:36:10.718932] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.422 [2024-04-26 15:36:10.718939] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.422 [2024-04-26 15:36:10.722469] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.422 [2024-04-26 15:36:10.731198] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.422 [2024-04-26 15:36:10.731826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-04-26 15:36:10.732201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-04-26 15:36:10.732213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.422 [2024-04-26 15:36:10.732222] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.422 [2024-04-26 15:36:10.732459] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.422 [2024-04-26 15:36:10.732680] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.422 [2024-04-26 15:36:10.732687] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.422 [2024-04-26 15:36:10.732695] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.422 [2024-04-26 15:36:10.736228] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.422 [2024-04-26 15:36:10.745160] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.422 [2024-04-26 15:36:10.745833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-04-26 15:36:10.746202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-04-26 15:36:10.746214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.422 [2024-04-26 15:36:10.746223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.422 [2024-04-26 15:36:10.746460] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.422 [2024-04-26 15:36:10.746681] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.422 [2024-04-26 15:36:10.746688] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.422 [2024-04-26 15:36:10.746696] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.422 [2024-04-26 15:36:10.750226] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.422 [2024-04-26 15:36:10.758954] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.422 [2024-04-26 15:36:10.759625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-04-26 15:36:10.760032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-04-26 15:36:10.760046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.422 [2024-04-26 15:36:10.760055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.422 [2024-04-26 15:36:10.760291] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.422 [2024-04-26 15:36:10.760512] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.422 [2024-04-26 15:36:10.760520] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.422 [2024-04-26 15:36:10.760531] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.422 [2024-04-26 15:36:10.764064] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.422 [2024-04-26 15:36:10.772795] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.422 [2024-04-26 15:36:10.773383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-04-26 15:36:10.773721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-04-26 15:36:10.773732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.422 [2024-04-26 15:36:10.773739] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.422 [2024-04-26 15:36:10.773971] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.422 [2024-04-26 15:36:10.774189] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.422 [2024-04-26 15:36:10.774197] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.422 [2024-04-26 15:36:10.774204] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.422 [2024-04-26 15:36:10.777727] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.422 [2024-04-26 15:36:10.786660] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.422 [2024-04-26 15:36:10.787204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-04-26 15:36:10.787548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-04-26 15:36:10.787557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.422 [2024-04-26 15:36:10.787565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.422 [2024-04-26 15:36:10.787782] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.422 [2024-04-26 15:36:10.788004] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.422 [2024-04-26 15:36:10.788012] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.422 [2024-04-26 15:36:10.788019] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.422 [2024-04-26 15:36:10.791539] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.422 [2024-04-26 15:36:10.800470] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.422 [2024-04-26 15:36:10.801016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-04-26 15:36:10.801367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-04-26 15:36:10.801376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.422 [2024-04-26 15:36:10.801383] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.422 [2024-04-26 15:36:10.801601] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.422 [2024-04-26 15:36:10.801818] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.422 [2024-04-26 15:36:10.801825] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.422 [2024-04-26 15:36:10.801835] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.422 [2024-04-26 15:36:10.805362] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.422 [2024-04-26 15:36:10.814298] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.422 [2024-04-26 15:36:10.814966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-04-26 15:36:10.815342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-04-26 15:36:10.815354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.422 [2024-04-26 15:36:10.815364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.422 [2024-04-26 15:36:10.815601] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.422 [2024-04-26 15:36:10.815821] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.422 [2024-04-26 15:36:10.815829] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.422 [2024-04-26 15:36:10.815844] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.422 [2024-04-26 15:36:10.819368] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.422 [2024-04-26 15:36:10.828135] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.422 [2024-04-26 15:36:10.828796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-04-26 15:36:10.829192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.422 [2024-04-26 15:36:10.829206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.423 [2024-04-26 15:36:10.829215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.423 [2024-04-26 15:36:10.829451] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.423 [2024-04-26 15:36:10.829672] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.423 [2024-04-26 15:36:10.829680] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.423 [2024-04-26 15:36:10.829687] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.423 [2024-04-26 15:36:10.833221] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.423 [2024-04-26 15:36:10.841956] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.423 [2024-04-26 15:36:10.842536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.423 [2024-04-26 15:36:10.842872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.423 [2024-04-26 15:36:10.842883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.423 [2024-04-26 15:36:10.842890] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.423 [2024-04-26 15:36:10.843108] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.423 [2024-04-26 15:36:10.843325] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.423 [2024-04-26 15:36:10.843332] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.423 [2024-04-26 15:36:10.843339] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.423 [2024-04-26 15:36:10.846866] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.423 [2024-04-26 15:36:10.855806] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.423 [2024-04-26 15:36:10.856420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.423 [2024-04-26 15:36:10.856795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.423 [2024-04-26 15:36:10.856809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.423 [2024-04-26 15:36:10.856818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.423 [2024-04-26 15:36:10.857063] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.423 [2024-04-26 15:36:10.857285] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.423 [2024-04-26 15:36:10.857293] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.423 [2024-04-26 15:36:10.857300] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.423 [2024-04-26 15:36:10.860828] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.684 [2024-04-26 15:36:10.869771] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.684 [2024-04-26 15:36:10.870342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.684 [2024-04-26 15:36:10.870687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.684 [2024-04-26 15:36:10.870696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.684 [2024-04-26 15:36:10.870704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.684 [2024-04-26 15:36:10.870929] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.684 [2024-04-26 15:36:10.871148] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.684 [2024-04-26 15:36:10.871155] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.684 [2024-04-26 15:36:10.871162] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.684 [2024-04-26 15:36:10.874700] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.684 [2024-04-26 15:36:10.883630] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.684 [2024-04-26 15:36:10.884313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.684 [2024-04-26 15:36:10.884679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.684 [2024-04-26 15:36:10.884692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.684 [2024-04-26 15:36:10.884702] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.684 [2024-04-26 15:36:10.884944] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.684 [2024-04-26 15:36:10.885164] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.684 [2024-04-26 15:36:10.885173] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.684 [2024-04-26 15:36:10.885180] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.684 [2024-04-26 15:36:10.888711] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.684 [2024-04-26 15:36:10.897448] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.684 [2024-04-26 15:36:10.898100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.684 [2024-04-26 15:36:10.898456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.684 [2024-04-26 15:36:10.898468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.684 [2024-04-26 15:36:10.898477] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.684 [2024-04-26 15:36:10.898714] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.684 [2024-04-26 15:36:10.898943] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.684 [2024-04-26 15:36:10.898951] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.684 [2024-04-26 15:36:10.898959] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.684 [2024-04-26 15:36:10.902482] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.684 [2024-04-26 15:36:10.911408] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.684 [2024-04-26 15:36:10.912055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.684 [2024-04-26 15:36:10.912416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.684 [2024-04-26 15:36:10.912429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:53.684 [2024-04-26 15:36:10.912438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:53.684 [2024-04-26 15:36:10.912674] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:53.684 [2024-04-26 15:36:10.912904] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:53.684 [2024-04-26 15:36:10.912913] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:53.684 [2024-04-26 15:36:10.912920] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.684 [2024-04-26 15:36:10.916443] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.684 [2024-04-26 15:36:10.925372] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.684 [2024-04-26 15:36:10.925976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.684 [2024-04-26 15:36:10.926270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.684 [2024-04-26 15:36:10.926283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:53.684 [2024-04-26 15:36:10.926292] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:53.684 [2024-04-26 15:36:10.926528] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:53.684 [2024-04-26 15:36:10.926749] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:53.684 [2024-04-26 15:36:10.926757] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:53.684 [2024-04-26 15:36:10.926764] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.684 [2024-04-26 15:36:10.930299] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.684 [2024-04-26 15:36:10.939226] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.684 [2024-04-26 15:36:10.939875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.684 [2024-04-26 15:36:10.940270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.684 [2024-04-26 15:36:10.940283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:53.684 [2024-04-26 15:36:10.940292] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:53.684 [2024-04-26 15:36:10.940529] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:53.684 [2024-04-26 15:36:10.940749] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:53.685 [2024-04-26 15:36:10.940757] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:53.685 [2024-04-26 15:36:10.940764] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.685 [2024-04-26 15:36:10.944312] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.685 [2024-04-26 15:36:10.953047] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.685 [2024-04-26 15:36:10.953755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:10.953974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:10.953990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:53.685 [2024-04-26 15:36:10.954000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:53.685 [2024-04-26 15:36:10.954236] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:53.685 [2024-04-26 15:36:10.954457] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:53.685 [2024-04-26 15:36:10.954465] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:53.685 [2024-04-26 15:36:10.954472] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.685 [2024-04-26 15:36:10.958004] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.685 [2024-04-26 15:36:10.966943] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.685 [2024-04-26 15:36:10.967486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:10.967848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:10.967858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:53.685 [2024-04-26 15:36:10.967866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:53.685 [2024-04-26 15:36:10.968084] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:53.685 [2024-04-26 15:36:10.968301] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:53.685 [2024-04-26 15:36:10.968309] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:53.685 [2024-04-26 15:36:10.968316] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.685 [2024-04-26 15:36:10.971847] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.685 [2024-04-26 15:36:10.980799] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.685 [2024-04-26 15:36:10.981250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:10.981583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:10.981592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:53.685 [2024-04-26 15:36:10.981607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:53.685 [2024-04-26 15:36:10.981824] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:53.685 [2024-04-26 15:36:10.982047] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:53.685 [2024-04-26 15:36:10.982055] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:53.685 [2024-04-26 15:36:10.982062] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.685 [2024-04-26 15:36:10.985585] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.685 [2024-04-26 15:36:10.994724] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.685 [2024-04-26 15:36:10.995178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:10.995486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:10.995496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:53.685 [2024-04-26 15:36:10.995503] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:53.685 [2024-04-26 15:36:10.995720] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:53.685 [2024-04-26 15:36:10.995942] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:53.685 [2024-04-26 15:36:10.995951] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:53.685 [2024-04-26 15:36:10.995957] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.685 [2024-04-26 15:36:10.999479] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.685 [2024-04-26 15:36:11.008617] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.685 [2024-04-26 15:36:11.009091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:11.009423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:11.009433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:53.685 [2024-04-26 15:36:11.009440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:53.685 [2024-04-26 15:36:11.009658] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:53.685 [2024-04-26 15:36:11.009879] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:53.685 [2024-04-26 15:36:11.009887] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:53.685 [2024-04-26 15:36:11.009893] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.685 [2024-04-26 15:36:11.013420] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.685 [2024-04-26 15:36:11.022570] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.685 [2024-04-26 15:36:11.023144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:11.023506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:11.023515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:53.685 [2024-04-26 15:36:11.023523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:53.685 [2024-04-26 15:36:11.023744] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:53.685 [2024-04-26 15:36:11.023965] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:53.685 [2024-04-26 15:36:11.023973] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:53.685 [2024-04-26 15:36:11.023980] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.685 [2024-04-26 15:36:11.027502] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.685 [2024-04-26 15:36:11.036449] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.685 [2024-04-26 15:36:11.037019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:11.037357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:11.037366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:53.685 [2024-04-26 15:36:11.037373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:53.685 [2024-04-26 15:36:11.037591] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:53.685 [2024-04-26 15:36:11.037807] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:53.685 [2024-04-26 15:36:11.037815] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:53.685 [2024-04-26 15:36:11.037821] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.685 [2024-04-26 15:36:11.041350] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.685 [2024-04-26 15:36:11.050291] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.685 [2024-04-26 15:36:11.050961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:11.051323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:11.051335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:53.685 [2024-04-26 15:36:11.051345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:53.685 [2024-04-26 15:36:11.051581] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:53.685 [2024-04-26 15:36:11.051801] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:53.685 [2024-04-26 15:36:11.051810] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:53.685 [2024-04-26 15:36:11.051817] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.685 [2024-04-26 15:36:11.055348] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.685 [2024-04-26 15:36:11.064083] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.685 [2024-04-26 15:36:11.064544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:11.064780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:11.064790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:53.685 [2024-04-26 15:36:11.064797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:53.685 [2024-04-26 15:36:11.065024] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:53.685 [2024-04-26 15:36:11.065242] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:53.685 [2024-04-26 15:36:11.065250] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:53.685 [2024-04-26 15:36:11.065257] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.685 [2024-04-26 15:36:11.068775] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.685 [2024-04-26 15:36:11.077923] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.685 [2024-04-26 15:36:11.078552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:11.078923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:11.078937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:53.685 [2024-04-26 15:36:11.078946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:53.685 [2024-04-26 15:36:11.079183] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:53.685 [2024-04-26 15:36:11.079404] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:53.685 [2024-04-26 15:36:11.079411] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:53.685 [2024-04-26 15:36:11.079419] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.685 [2024-04-26 15:36:11.082946] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.685 [2024-04-26 15:36:11.091882] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.685 [2024-04-26 15:36:11.092523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:11.092896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:11.092909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:53.685 [2024-04-26 15:36:11.092918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:53.685 [2024-04-26 15:36:11.093155] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:53.685 [2024-04-26 15:36:11.093375] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:53.685 [2024-04-26 15:36:11.093383] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:53.685 [2024-04-26 15:36:11.093391] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.685 [2024-04-26 15:36:11.096921] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.685 [2024-04-26 15:36:11.105852] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.685 [2024-04-26 15:36:11.106274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:11.106518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:11.106528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:53.685 [2024-04-26 15:36:11.106536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:53.685 [2024-04-26 15:36:11.106753] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:53.685 [2024-04-26 15:36:11.106980] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:53.685 [2024-04-26 15:36:11.106989] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:53.685 [2024-04-26 15:36:11.106996] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.685 [2024-04-26 15:36:11.110515] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.685 [2024-04-26 15:36:11.119649] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.685 [2024-04-26 15:36:11.120190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:11.120551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.685 [2024-04-26 15:36:11.120560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:53.685 [2024-04-26 15:36:11.120568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:53.685 [2024-04-26 15:36:11.120785] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:53.685 [2024-04-26 15:36:11.121006] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:53.685 [2024-04-26 15:36:11.121014] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:53.685 [2024-04-26 15:36:11.121021] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.685 [2024-04-26 15:36:11.124538] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.948 [2024-04-26 15:36:11.133463] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.948 [2024-04-26 15:36:11.134001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.948 [2024-04-26 15:36:11.134306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.948 [2024-04-26 15:36:11.134315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:53.948 [2024-04-26 15:36:11.134323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:53.948 [2024-04-26 15:36:11.134540] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:53.948 [2024-04-26 15:36:11.134757] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:53.948 [2024-04-26 15:36:11.134764] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:53.948 [2024-04-26 15:36:11.134771] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.948 [2024-04-26 15:36:11.138306] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.948 [2024-04-26 15:36:11.147276] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.948 [2024-04-26 15:36:11.147801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.948 [2024-04-26 15:36:11.148123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.948 [2024-04-26 15:36:11.148133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:53.948 [2024-04-26 15:36:11.148140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:53.948 [2024-04-26 15:36:11.148358] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:53.948 [2024-04-26 15:36:11.148575] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:53.948 [2024-04-26 15:36:11.148586] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:53.948 [2024-04-26 15:36:11.148592] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.948 [2024-04-26 15:36:11.152116] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.948 [2024-04-26 15:36:11.161046] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.948 [2024-04-26 15:36:11.161612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.948 [2024-04-26 15:36:11.161959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.948 [2024-04-26 15:36:11.161969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:53.948 [2024-04-26 15:36:11.161977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:53.948 [2024-04-26 15:36:11.162194] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:53.948 [2024-04-26 15:36:11.162411] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:53.948 [2024-04-26 15:36:11.162418] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:53.948 [2024-04-26 15:36:11.162425] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.948 [2024-04-26 15:36:11.165944] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.948 [2024-04-26 15:36:11.174879] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.948 [2024-04-26 15:36:11.175412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.948 [2024-04-26 15:36:11.175749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.948 [2024-04-26 15:36:11.175759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:53.948 [2024-04-26 15:36:11.175766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:53.948 [2024-04-26 15:36:11.175987] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:53.948 [2024-04-26 15:36:11.176205] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:53.948 [2024-04-26 15:36:11.176212] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:53.948 [2024-04-26 15:36:11.176219] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.948 [2024-04-26 15:36:11.179738] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.948 [2024-04-26 15:36:11.188664] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.948 [2024-04-26 15:36:11.189249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.948 [2024-04-26 15:36:11.189453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.948 [2024-04-26 15:36:11.189465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:53.948 [2024-04-26 15:36:11.189473] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:53.948 [2024-04-26 15:36:11.189691] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:53.948 [2024-04-26 15:36:11.189912] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:53.948 [2024-04-26 15:36:11.189920] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:53.948 [2024-04-26 15:36:11.189930] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.948 [2024-04-26 15:36:11.193452] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.948 [2024-04-26 15:36:11.202592] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.948 [2024-04-26 15:36:11.203278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.948 [2024-04-26 15:36:11.203632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.948 [2024-04-26 15:36:11.203645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:53.948 [2024-04-26 15:36:11.203654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:53.948 [2024-04-26 15:36:11.203895] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:53.948 [2024-04-26 15:36:11.204117] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:53.948 [2024-04-26 15:36:11.204125] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:53.948 [2024-04-26 15:36:11.204132] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.948 [2024-04-26 15:36:11.207655] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.948 [2024-04-26 15:36:11.216381] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:53.948 [2024-04-26 15:36:11.216960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.948 [2024-04-26 15:36:11.217262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:53.948 [2024-04-26 15:36:11.217272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:53.948 [2024-04-26 15:36:11.217280] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:53.948 [2024-04-26 15:36:11.217498] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:53.948 [2024-04-26 15:36:11.217715] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:53.948 [2024-04-26 15:36:11.217723] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:53.948 [2024-04-26 15:36:11.217729] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:53.948 [2024-04-26 15:36:11.221252] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:53.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1781234 Killed "${NVMF_APP[@]}" "$@" 00:25:53.948 15:36:11 -- host/bdevperf.sh@36 -- # tgt_init 00:25:53.948 [2024-04-26 15:36:11.230180] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.948 15:36:11 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:53.948 15:36:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:53.948 [2024-04-26 15:36:11.230708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.948 15:36:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:53.948 15:36:11 -- common/autotest_common.sh@10 -- # set +x 00:25:53.948 [2024-04-26 15:36:11.231059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.948 [2024-04-26 15:36:11.231069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.948 [2024-04-26 15:36:11.231076] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.948 [2024-04-26 15:36:11.231294] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.948 [2024-04-26 15:36:11.231515] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.948 [2024-04-26 15:36:11.231523] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.948 [2024-04-26 15:36:11.231529] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.948 [2024-04-26 15:36:11.235049] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.948 15:36:11 -- nvmf/common.sh@470 -- # nvmfpid=1783317 00:25:53.948 15:36:11 -- nvmf/common.sh@471 -- # waitforlisten 1783317 00:25:53.948 15:36:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:53.948 15:36:11 -- common/autotest_common.sh@817 -- # '[' -z 1783317 ']' 00:25:53.948 15:36:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.948 15:36:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:53.948 15:36:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.948 15:36:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:53.948 15:36:11 -- common/autotest_common.sh@10 -- # set +x 00:25:53.948 [2024-04-26 15:36:11.243980] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.948 [2024-04-26 15:36:11.244562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.948 [2024-04-26 15:36:11.244914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.948 [2024-04-26 15:36:11.244924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.948 [2024-04-26 15:36:11.244931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.948 [2024-04-26 15:36:11.245149] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.948 [2024-04-26 15:36:11.245367] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.948 [2024-04-26 
15:36:11.245374] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.948 [2024-04-26 15:36:11.245381] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.948 [2024-04-26 15:36:11.248900] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.948 [2024-04-26 15:36:11.257823] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.948 [2024-04-26 15:36:11.258397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.948 [2024-04-26 15:36:11.258734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.948 [2024-04-26 15:36:11.258743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.948 [2024-04-26 15:36:11.258750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.948 [2024-04-26 15:36:11.258972] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.948 [2024-04-26 15:36:11.259189] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.948 [2024-04-26 15:36:11.259197] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.948 [2024-04-26 15:36:11.259203] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.948 [2024-04-26 15:36:11.262717] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.948 [2024-04-26 15:36:11.271657] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.948 [2024-04-26 15:36:11.272260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.948 [2024-04-26 15:36:11.272413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.948 [2024-04-26 15:36:11.272422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.949 [2024-04-26 15:36:11.272429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.949 [2024-04-26 15:36:11.272647] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.949 [2024-04-26 15:36:11.272869] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.949 [2024-04-26 15:36:11.272877] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.949 [2024-04-26 15:36:11.272884] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.949 [2024-04-26 15:36:11.276415] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.949 [2024-04-26 15:36:11.285556] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.949 [2024-04-26 15:36:11.286016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.949 [2024-04-26 15:36:11.286238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.949 [2024-04-26 15:36:11.286248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.949 [2024-04-26 15:36:11.286255] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.949 [2024-04-26 15:36:11.286473] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.949 [2024-04-26 15:36:11.286691] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.949 [2024-04-26 15:36:11.286698] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.949 [2024-04-26 15:36:11.286705] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.949 [2024-04-26 15:36:11.286881] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:25:53.949 [2024-04-26 15:36:11.286925] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.949 [2024-04-26 15:36:11.290231] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.949 [2024-04-26 15:36:11.299366] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.949 [2024-04-26 15:36:11.299948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.949 [2024-04-26 15:36:11.300064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.949 [2024-04-26 15:36:11.300077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.949 [2024-04-26 15:36:11.300087] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.949 [2024-04-26 15:36:11.300326] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.949 [2024-04-26 15:36:11.300547] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.949 [2024-04-26 15:36:11.300555] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.949 [2024-04-26 15:36:11.300563] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.949 [2024-04-26 15:36:11.304094] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.949 [2024-04-26 15:36:11.313240] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.949 [2024-04-26 15:36:11.313825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.949 [2024-04-26 15:36:11.314156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.949 [2024-04-26 15:36:11.314167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.949 [2024-04-26 15:36:11.314174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.949 [2024-04-26 15:36:11.314393] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.949 [2024-04-26 15:36:11.314610] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.949 [2024-04-26 15:36:11.314617] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.949 [2024-04-26 15:36:11.314624] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.949 [2024-04-26 15:36:11.318148] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.949 [2024-04-26 15:36:11.327091] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.949 [2024-04-26 15:36:11.327769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.949 [2024-04-26 15:36:11.328155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.949 [2024-04-26 15:36:11.328169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.949 [2024-04-26 15:36:11.328179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.949 [2024-04-26 15:36:11.328416] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.949 [2024-04-26 15:36:11.328637] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.949 [2024-04-26 15:36:11.328645] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.949 [2024-04-26 15:36:11.328652] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.949 [2024-04-26 15:36:11.332181] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.949 EAL: No free 2048 kB hugepages reported on node 1 00:25:53.949 [2024-04-26 15:36:11.340913] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.949 [2024-04-26 15:36:11.341541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.949 [2024-04-26 15:36:11.341920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.949 [2024-04-26 15:36:11.341934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.949 [2024-04-26 15:36:11.341944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.949 [2024-04-26 15:36:11.342181] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.949 [2024-04-26 15:36:11.342401] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.949 [2024-04-26 15:36:11.342410] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.949 [2024-04-26 15:36:11.342417] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.949 [2024-04-26 15:36:11.345955] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.949 [2024-04-26 15:36:11.354682] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.949 [2024-04-26 15:36:11.355124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.949 [2024-04-26 15:36:11.355432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.949 [2024-04-26 15:36:11.355442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.949 [2024-04-26 15:36:11.355450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.949 [2024-04-26 15:36:11.355668] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.949 [2024-04-26 15:36:11.355890] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.949 [2024-04-26 15:36:11.355898] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.949 [2024-04-26 15:36:11.355905] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.949 [2024-04-26 15:36:11.359423] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.949 [2024-04-26 15:36:11.368563] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.949 [2024-04-26 15:36:11.369040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.949 [2024-04-26 15:36:11.369374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.949 [2024-04-26 15:36:11.369383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.949 [2024-04-26 15:36:11.369391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.949 [2024-04-26 15:36:11.369609] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.949 [2024-04-26 15:36:11.369826] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.949 [2024-04-26 15:36:11.369834] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.949 [2024-04-26 15:36:11.369847] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.949 [2024-04-26 15:36:11.373366] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.949 [2024-04-26 15:36:11.382513] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.949 [2024-04-26 15:36:11.383184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.949 [2024-04-26 15:36:11.383554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.949 [2024-04-26 15:36:11.383566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:53.949 [2024-04-26 15:36:11.383576] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:53.949 [2024-04-26 15:36:11.383812] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:53.949 [2024-04-26 15:36:11.384042] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.949 [2024-04-26 15:36:11.384051] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.949 [2024-04-26 15:36:11.384058] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.949 [2024-04-26 15:36:11.387580] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.949 [2024-04-26 15:36:11.388799] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:54.211 [2024-04-26 15:36:11.396413] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.211 [2024-04-26 15:36:11.397144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.211 [2024-04-26 15:36:11.397513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.211 [2024-04-26 15:36:11.397526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.211 [2024-04-26 15:36:11.397535] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.211 [2024-04-26 15:36:11.397773] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.211 [2024-04-26 15:36:11.397999] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.211 [2024-04-26 15:36:11.398008] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.211 [2024-04-26 15:36:11.398015] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.211 [2024-04-26 15:36:11.401541] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.211 [2024-04-26 15:36:11.410268] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.211 [2024-04-26 15:36:11.410859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.211 [2024-04-26 15:36:11.411103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.211 [2024-04-26 15:36:11.411113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.211 [2024-04-26 15:36:11.411121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.211 [2024-04-26 15:36:11.411340] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.211 [2024-04-26 15:36:11.411557] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.211 [2024-04-26 15:36:11.411565] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.211 [2024-04-26 15:36:11.411572] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.211 [2024-04-26 15:36:11.415093] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.211 [2024-04-26 15:36:11.424235] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.211 [2024-04-26 15:36:11.424817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.211 [2024-04-26 15:36:11.425095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.211 [2024-04-26 15:36:11.425105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.211 [2024-04-26 15:36:11.425113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.211 [2024-04-26 15:36:11.425331] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.211 [2024-04-26 15:36:11.425548] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.211 [2024-04-26 15:36:11.425556] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.211 [2024-04-26 15:36:11.425564] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.211 [2024-04-26 15:36:11.429088] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.211 [2024-04-26 15:36:11.438025] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.211 [2024-04-26 15:36:11.438556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.211 [2024-04-26 15:36:11.438958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.211 [2024-04-26 15:36:11.438969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.211 [2024-04-26 15:36:11.438976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.211 [2024-04-26 15:36:11.439195] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.211 [2024-04-26 15:36:11.439412] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.211 [2024-04-26 15:36:11.439420] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.211 [2024-04-26 15:36:11.439427] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.211 [2024-04-26 15:36:11.441450] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.211 [2024-04-26 15:36:11.441474] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.211 [2024-04-26 15:36:11.441479] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.211 [2024-04-26 15:36:11.441484] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.211 [2024-04-26 15:36:11.441487] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:54.211 [2024-04-26 15:36:11.441657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:54.211 [2024-04-26 15:36:11.441777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.211 [2024-04-26 15:36:11.441778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:54.211 [2024-04-26 15:36:11.442954] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:54.211 [2024-04-26 15:36:11.451893] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.211 [2024-04-26 15:36:11.452499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.211 [2024-04-26 15:36:11.452722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.211 [2024-04-26 15:36:11.452732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.211 [2024-04-26 15:36:11.452741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.211 [2024-04-26 15:36:11.452964] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.211 [2024-04-26 15:36:11.453183] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.211 [2024-04-26 15:36:11.453190] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.211 [2024-04-26 15:36:11.453197] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.211 [2024-04-26 15:36:11.456719] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.211 [2024-04-26 15:36:11.465650] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.211 [2024-04-26 15:36:11.466205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.211 [2024-04-26 15:36:11.466547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.211 [2024-04-26 15:36:11.466561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.211 [2024-04-26 15:36:11.466571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.211 [2024-04-26 15:36:11.466814] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.211 [2024-04-26 15:36:11.467048] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.211 [2024-04-26 15:36:11.467057] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.211 [2024-04-26 15:36:11.467065] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.211 [2024-04-26 15:36:11.470590] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.211 [2024-04-26 15:36:11.479536] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.211 [2024-04-26 15:36:11.480223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.211 [2024-04-26 15:36:11.480568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.211 [2024-04-26 15:36:11.480580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.211 [2024-04-26 15:36:11.480590] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.211 [2024-04-26 15:36:11.480831] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.211 [2024-04-26 15:36:11.481058] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.211 [2024-04-26 15:36:11.481066] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.211 [2024-04-26 15:36:11.481074] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.211 [2024-04-26 15:36:11.484597] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.211 [2024-04-26 15:36:11.493320] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.211 [2024-04-26 15:36:11.494071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.211 [2024-04-26 15:36:11.494431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.211 [2024-04-26 15:36:11.494444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.211 [2024-04-26 15:36:11.494454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.211 [2024-04-26 15:36:11.494691] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.211 [2024-04-26 15:36:11.494920] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.211 [2024-04-26 15:36:11.494929] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.211 [2024-04-26 15:36:11.494936] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.211 [2024-04-26 15:36:11.498461] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.211 [2024-04-26 15:36:11.507185] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.212 [2024-04-26 15:36:11.507902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.212 [2024-04-26 15:36:11.508272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.212 [2024-04-26 15:36:11.508285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.212 [2024-04-26 15:36:11.508296] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.212 [2024-04-26 15:36:11.508533] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.212 [2024-04-26 15:36:11.508754] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.212 [2024-04-26 15:36:11.508768] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.212 [2024-04-26 15:36:11.508775] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.212 [2024-04-26 15:36:11.512306] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.212 [2024-04-26 15:36:11.521036] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.212 [2024-04-26 15:36:11.521675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.212 [2024-04-26 15:36:11.522036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.212 [2024-04-26 15:36:11.522049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.212 [2024-04-26 15:36:11.522059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.212 [2024-04-26 15:36:11.522296] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.212 [2024-04-26 15:36:11.522517] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.212 [2024-04-26 15:36:11.522526] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.212 [2024-04-26 15:36:11.522534] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.212 [2024-04-26 15:36:11.526059] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.212 [2024-04-26 15:36:11.534992] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.212 [2024-04-26 15:36:11.535382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.212 [2024-04-26 15:36:11.535567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.212 [2024-04-26 15:36:11.535577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.212 [2024-04-26 15:36:11.535585] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.212 [2024-04-26 15:36:11.535803] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.212 [2024-04-26 15:36:11.536026] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.212 [2024-04-26 15:36:11.536035] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.212 [2024-04-26 15:36:11.536042] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.212 [2024-04-26 15:36:11.539558] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.212 [2024-04-26 15:36:11.548902] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.212 [2024-04-26 15:36:11.549455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.212 [2024-04-26 15:36:11.549821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.212 [2024-04-26 15:36:11.549831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.212 [2024-04-26 15:36:11.549844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.212 [2024-04-26 15:36:11.550062] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.212 [2024-04-26 15:36:11.550280] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.212 [2024-04-26 15:36:11.550288] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.212 [2024-04-26 15:36:11.550300] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.212 [2024-04-26 15:36:11.553824] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.212 [2024-04-26 15:36:11.562748] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.212 [2024-04-26 15:36:11.563451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.212 [2024-04-26 15:36:11.563816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.212 [2024-04-26 15:36:11.563828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.212 [2024-04-26 15:36:11.563844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.212 [2024-04-26 15:36:11.564081] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.212 [2024-04-26 15:36:11.564302] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.212 [2024-04-26 15:36:11.564310] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.212 [2024-04-26 15:36:11.564317] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.212 [2024-04-26 15:36:11.567844] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.212 [2024-04-26 15:36:11.576575] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.212 [2024-04-26 15:36:11.577250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.212 [2024-04-26 15:36:11.577597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.212 [2024-04-26 15:36:11.577609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.212 [2024-04-26 15:36:11.577619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.212 [2024-04-26 15:36:11.577862] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.212 [2024-04-26 15:36:11.578084] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.212 [2024-04-26 15:36:11.578092] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.212 [2024-04-26 15:36:11.578099] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.212 [2024-04-26 15:36:11.581623] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.212 [2024-04-26 15:36:11.590346] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.212 [2024-04-26 15:36:11.590958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.212 [2024-04-26 15:36:11.591382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.212 [2024-04-26 15:36:11.591394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.212 [2024-04-26 15:36:11.591403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.212 [2024-04-26 15:36:11.591640] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.212 [2024-04-26 15:36:11.591867] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.212 [2024-04-26 15:36:11.591877] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.212 [2024-04-26 15:36:11.591884] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.212 [2024-04-26 15:36:11.595416] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.212 [2024-04-26 15:36:11.604132] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.212 [2024-04-26 15:36:11.604532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.212 [2024-04-26 15:36:11.604726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.212 [2024-04-26 15:36:11.604736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.212 [2024-04-26 15:36:11.604744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.212 [2024-04-26 15:36:11.604972] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.212 [2024-04-26 15:36:11.605191] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.212 [2024-04-26 15:36:11.605198] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.212 [2024-04-26 15:36:11.605205] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.212 [2024-04-26 15:36:11.608729] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.212 [2024-04-26 15:36:11.618065] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.212 [2024-04-26 15:36:11.618708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.212 [2024-04-26 15:36:11.619142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.212 [2024-04-26 15:36:11.619158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.212 [2024-04-26 15:36:11.619167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.212 [2024-04-26 15:36:11.619405] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.212 [2024-04-26 15:36:11.619625] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.212 [2024-04-26 15:36:11.619633] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.212 [2024-04-26 15:36:11.619641] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.212 [2024-04-26 15:36:11.623168] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.212 [2024-04-26 15:36:11.631884] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.212 [2024-04-26 15:36:11.632482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.212 [2024-04-26 15:36:11.632799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.212 [2024-04-26 15:36:11.632808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.212 [2024-04-26 15:36:11.632816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.212 [2024-04-26 15:36:11.633038] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.212 [2024-04-26 15:36:11.633256] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.212 [2024-04-26 15:36:11.633263] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.212 [2024-04-26 15:36:11.633270] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.212 [2024-04-26 15:36:11.636791] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.212 [2024-04-26 15:36:11.645726] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.212 [2024-04-26 15:36:11.646420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.212 [2024-04-26 15:36:11.646778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.212 [2024-04-26 15:36:11.646791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.212 [2024-04-26 15:36:11.646800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.212 [2024-04-26 15:36:11.647044] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.212 [2024-04-26 15:36:11.647266] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.212 [2024-04-26 15:36:11.647274] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.213 [2024-04-26 15:36:11.647281] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.213 [2024-04-26 15:36:11.650804] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.474 [2024-04-26 15:36:11.659533] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.474 [2024-04-26 15:36:11.660070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.474 [2024-04-26 15:36:11.660427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.474 [2024-04-26 15:36:11.660437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.474 [2024-04-26 15:36:11.660444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.474 [2024-04-26 15:36:11.660663] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.474 [2024-04-26 15:36:11.660883] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.474 [2024-04-26 15:36:11.660891] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.474 [2024-04-26 15:36:11.660898] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.474 [2024-04-26 15:36:11.664420] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.474 [2024-04-26 15:36:11.673348] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.474 [2024-04-26 15:36:11.673783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.474 [2024-04-26 15:36:11.673981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.474 [2024-04-26 15:36:11.673991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.474 [2024-04-26 15:36:11.673998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.474 [2024-04-26 15:36:11.674215] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.474 [2024-04-26 15:36:11.674433] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.474 [2024-04-26 15:36:11.674441] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.474 [2024-04-26 15:36:11.674448] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.474 [2024-04-26 15:36:11.677979] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.474 [2024-04-26 15:36:11.687115] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.474 [2024-04-26 15:36:11.687704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.474 [2024-04-26 15:36:11.688038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.474 [2024-04-26 15:36:11.688048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.474 [2024-04-26 15:36:11.688056] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.474 [2024-04-26 15:36:11.688273] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.474 [2024-04-26 15:36:11.688491] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.474 [2024-04-26 15:36:11.688498] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.474 [2024-04-26 15:36:11.688505] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.474 [2024-04-26 15:36:11.692025] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.474 [2024-04-26 15:36:11.700949] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.474 [2024-04-26 15:36:11.701500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.474 [2024-04-26 15:36:11.701690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.474 [2024-04-26 15:36:11.701699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.474 [2024-04-26 15:36:11.701706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.474 [2024-04-26 15:36:11.701930] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.474 [2024-04-26 15:36:11.702148] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.474 [2024-04-26 15:36:11.702155] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.474 [2024-04-26 15:36:11.702161] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.474 [2024-04-26 15:36:11.705675] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.474 [2024-04-26 15:36:11.714808] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.474 [2024-04-26 15:36:11.715445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.474 [2024-04-26 15:36:11.715662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.474 [2024-04-26 15:36:11.715674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.474 [2024-04-26 15:36:11.715683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.474 [2024-04-26 15:36:11.715928] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.474 [2024-04-26 15:36:11.716150] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.474 [2024-04-26 15:36:11.716158] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.474 [2024-04-26 15:36:11.716165] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.474 [2024-04-26 15:36:11.719685] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.474 [2024-04-26 15:36:11.728611] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.474 [2024-04-26 15:36:11.729292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.474 [2024-04-26 15:36:11.729639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.474 [2024-04-26 15:36:11.729656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.474 [2024-04-26 15:36:11.729665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.474 [2024-04-26 15:36:11.729910] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.474 [2024-04-26 15:36:11.730132] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.474 [2024-04-26 15:36:11.730140] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.474 [2024-04-26 15:36:11.730147] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.475 [2024-04-26 15:36:11.733669] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.475 [2024-04-26 15:36:11.742392] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.475 [2024-04-26 15:36:11.743083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.475 [2024-04-26 15:36:11.743362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.475 [2024-04-26 15:36:11.743375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.475 [2024-04-26 15:36:11.743384] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.475 [2024-04-26 15:36:11.743622] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.475 [2024-04-26 15:36:11.743849] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.475 [2024-04-26 15:36:11.743858] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.475 [2024-04-26 15:36:11.743865] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.475 [2024-04-26 15:36:11.747388] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.475 [2024-04-26 15:36:11.756313] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.475 [2024-04-26 15:36:11.756587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.475 [2024-04-26 15:36:11.756777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.475 [2024-04-26 15:36:11.756787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.475 [2024-04-26 15:36:11.756795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.475 [2024-04-26 15:36:11.757020] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.475 [2024-04-26 15:36:11.757239] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.475 [2024-04-26 15:36:11.757246] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.475 [2024-04-26 15:36:11.757253] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.475 [2024-04-26 15:36:11.760774] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.475 [2024-04-26 15:36:11.770114] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.475 [2024-04-26 15:36:11.770807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.475 [2024-04-26 15:36:11.771241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.475 [2024-04-26 15:36:11.771254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.475 [2024-04-26 15:36:11.771268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.475 [2024-04-26 15:36:11.771505] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.475 [2024-04-26 15:36:11.771726] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.475 [2024-04-26 15:36:11.771734] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.475 [2024-04-26 15:36:11.771741] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.475 [2024-04-26 15:36:11.775276] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.475 [2024-04-26 15:36:11.784064] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.475 [2024-04-26 15:36:11.784539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.475 [2024-04-26 15:36:11.784861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.475 [2024-04-26 15:36:11.784872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.475 [2024-04-26 15:36:11.784880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.475 [2024-04-26 15:36:11.785098] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.475 [2024-04-26 15:36:11.785315] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.475 [2024-04-26 15:36:11.785322] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.475 [2024-04-26 15:36:11.785329] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.475 [2024-04-26 15:36:11.788851] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.475 [2024-04-26 15:36:11.797988] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.475 [2024-04-26 15:36:11.798499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.475 [2024-04-26 15:36:11.798856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.475 [2024-04-26 15:36:11.798870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.475 [2024-04-26 15:36:11.798879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.475 [2024-04-26 15:36:11.799116] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.475 [2024-04-26 15:36:11.799337] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.475 [2024-04-26 15:36:11.799346] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.475 [2024-04-26 15:36:11.799353] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.475 [2024-04-26 15:36:11.802882] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.475 [2024-04-26 15:36:11.811810] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.475 [2024-04-26 15:36:11.812483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.475 [2024-04-26 15:36:11.812830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.475 [2024-04-26 15:36:11.812850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.475 [2024-04-26 15:36:11.812860] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.475 [2024-04-26 15:36:11.813102] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.475 [2024-04-26 15:36:11.813323] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.475 [2024-04-26 15:36:11.813331] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.475 [2024-04-26 15:36:11.813338] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.475 [2024-04-26 15:36:11.816861] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.475 [2024-04-26 15:36:11.825591] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.475 [2024-04-26 15:36:11.826269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.475 [2024-04-26 15:36:11.826614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.475 [2024-04-26 15:36:11.826627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.475 [2024-04-26 15:36:11.826636] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.475 [2024-04-26 15:36:11.826882] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.475 [2024-04-26 15:36:11.827103] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.475 [2024-04-26 15:36:11.827111] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.475 [2024-04-26 15:36:11.827118] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.475 [2024-04-26 15:36:11.830641] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.475 [2024-04-26 15:36:11.839361] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.475 [2024-04-26 15:36:11.839950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.475 [2024-04-26 15:36:11.840315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.475 [2024-04-26 15:36:11.840328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.475 [2024-04-26 15:36:11.840337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.475 [2024-04-26 15:36:11.840574] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.475 [2024-04-26 15:36:11.840795] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.475 [2024-04-26 15:36:11.840803] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.475 [2024-04-26 15:36:11.840810] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.475 [2024-04-26 15:36:11.844340] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.476 [2024-04-26 15:36:11.853145] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.476 [2024-04-26 15:36:11.853800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.476 [2024-04-26 15:36:11.854079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.476 [2024-04-26 15:36:11.854093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.476 [2024-04-26 15:36:11.854102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.476 [2024-04-26 15:36:11.854339] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.476 [2024-04-26 15:36:11.854565] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.476 [2024-04-26 15:36:11.854573] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.476 [2024-04-26 15:36:11.854581] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.476 [2024-04-26 15:36:11.858107] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.476 [2024-04-26 15:36:11.867038] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.476 [2024-04-26 15:36:11.867588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.476 [2024-04-26 15:36:11.867774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.476 [2024-04-26 15:36:11.867784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.476 [2024-04-26 15:36:11.867791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.476 [2024-04-26 15:36:11.868016] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.476 [2024-04-26 15:36:11.868237] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.476 [2024-04-26 15:36:11.868245] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.476 [2024-04-26 15:36:11.868252] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.476 [2024-04-26 15:36:11.871767] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.476 [2024-04-26 15:36:11.881133] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.476 [2024-04-26 15:36:11.881779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.476 [2024-04-26 15:36:11.882208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.476 [2024-04-26 15:36:11.882221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.476 [2024-04-26 15:36:11.882231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.476 [2024-04-26 15:36:11.882467] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.476 [2024-04-26 15:36:11.882688] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.476 [2024-04-26 15:36:11.882696] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.476 [2024-04-26 15:36:11.882703] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.476 [2024-04-26 15:36:11.886230] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.476 [2024-04-26 15:36:11.894954] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.476 [2024-04-26 15:36:11.895557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.476 [2024-04-26 15:36:11.895951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.476 [2024-04-26 15:36:11.895961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.476 [2024-04-26 15:36:11.895969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.476 [2024-04-26 15:36:11.896187] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.476 [2024-04-26 15:36:11.896404] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.476 [2024-04-26 15:36:11.896417] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.476 [2024-04-26 15:36:11.896424] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.476 [2024-04-26 15:36:11.899949] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.476 [2024-04-26 15:36:11.908877] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.476 [2024-04-26 15:36:11.909549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.476 [2024-04-26 15:36:11.909902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.476 [2024-04-26 15:36:11.909916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.476 [2024-04-26 15:36:11.909927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.476 [2024-04-26 15:36:11.910163] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.476 [2024-04-26 15:36:11.910384] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.476 [2024-04-26 15:36:11.910392] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.476 [2024-04-26 15:36:11.910399] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.476 [2024-04-26 15:36:11.913929] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.738 [2024-04-26 15:36:11.922650] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.738 [2024-04-26 15:36:11.923336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-04-26 15:36:11.923550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-04-26 15:36:11.923563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.738 [2024-04-26 15:36:11.923572] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.738 [2024-04-26 15:36:11.923809] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.738 [2024-04-26 15:36:11.924039] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.738 [2024-04-26 15:36:11.924048] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.738 [2024-04-26 15:36:11.924055] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.738 [2024-04-26 15:36:11.927581] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.738 [2024-04-26 15:36:11.936512] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.738 [2024-04-26 15:36:11.937198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-04-26 15:36:11.937545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-04-26 15:36:11.937558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.738 [2024-04-26 15:36:11.937567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.738 [2024-04-26 15:36:11.937804] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.738 [2024-04-26 15:36:11.938031] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.738 [2024-04-26 15:36:11.938041] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.738 [2024-04-26 15:36:11.938053] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.738 [2024-04-26 15:36:11.941578] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.738 [2024-04-26 15:36:11.950300] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.738 [2024-04-26 15:36:11.950961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-04-26 15:36:11.951364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-04-26 15:36:11.951377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.738 [2024-04-26 15:36:11.951386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.738 [2024-04-26 15:36:11.951622] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.738 [2024-04-26 15:36:11.951850] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.738 [2024-04-26 15:36:11.951859] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.738 [2024-04-26 15:36:11.951867] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.738 [2024-04-26 15:36:11.955389] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.738 [2024-04-26 15:36:11.964112] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.738 [2024-04-26 15:36:11.964791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-04-26 15:36:11.965143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-04-26 15:36:11.965156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.738 [2024-04-26 15:36:11.965166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.738 [2024-04-26 15:36:11.965403] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.738 [2024-04-26 15:36:11.965623] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.738 [2024-04-26 15:36:11.965631] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.738 [2024-04-26 15:36:11.965639] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.738 [2024-04-26 15:36:11.969172] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.738 [2024-04-26 15:36:11.977905] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.738 [2024-04-26 15:36:11.978599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-04-26 15:36:11.978815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-04-26 15:36:11.978828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.738 [2024-04-26 15:36:11.978844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.738 [2024-04-26 15:36:11.979082] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.738 [2024-04-26 15:36:11.979303] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.738 [2024-04-26 15:36:11.979311] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.738 [2024-04-26 15:36:11.979318] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.738 [2024-04-26 15:36:11.982850] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.738 [2024-04-26 15:36:11.991777] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.738 [2024-04-26 15:36:11.992493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-04-26 15:36:11.992844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-04-26 15:36:11.992858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.738 [2024-04-26 15:36:11.992867] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.738 [2024-04-26 15:36:11.993103] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.738 [2024-04-26 15:36:11.993325] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.738 [2024-04-26 15:36:11.993333] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.738 [2024-04-26 15:36:11.993341] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.738 [2024-04-26 15:36:11.996868] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.738 [2024-04-26 15:36:12.005592] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.738 [2024-04-26 15:36:12.006153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-04-26 15:36:12.006516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-04-26 15:36:12.006529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.738 [2024-04-26 15:36:12.006538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.738 [2024-04-26 15:36:12.006775] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.738 [2024-04-26 15:36:12.007005] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.738 [2024-04-26 15:36:12.007015] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.738 [2024-04-26 15:36:12.007022] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.738 [2024-04-26 15:36:12.010547] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.738 [2024-04-26 15:36:12.019483] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.738 [2024-04-26 15:36:12.020186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.738 [2024-04-26 15:36:12.020545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-04-26 15:36:12.020557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.739 [2024-04-26 15:36:12.020567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.739 [2024-04-26 15:36:12.020804] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.739 [2024-04-26 15:36:12.021033] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.739 [2024-04-26 15:36:12.021043] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.739 [2024-04-26 15:36:12.021050] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.739 [2024-04-26 15:36:12.024579] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.739 [2024-04-26 15:36:12.033308] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.739 [2024-04-26 15:36:12.033634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-04-26 15:36:12.034035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-04-26 15:36:12.034047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.739 [2024-04-26 15:36:12.034056] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.739 [2024-04-26 15:36:12.034279] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.739 [2024-04-26 15:36:12.034497] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.739 [2024-04-26 15:36:12.034505] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.739 [2024-04-26 15:36:12.034512] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.739 [2024-04-26 15:36:12.038036] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.739 [2024-04-26 15:36:12.047168] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.739 [2024-04-26 15:36:12.047740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-04-26 15:36:12.047982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-04-26 15:36:12.047996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.739 [2024-04-26 15:36:12.048006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.739 [2024-04-26 15:36:12.048242] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.739 [2024-04-26 15:36:12.048463] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.739 [2024-04-26 15:36:12.048471] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.739 [2024-04-26 15:36:12.048478] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.739 [2024-04-26 15:36:12.052004] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
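Each record group above is one pass of the same retry cycle: the bdev layer disconnects the controller (`nvme_ctrlr_disconnect`), the TCP transport's `connect()` is refused (`errno = 111`), controller reinitialization fails, and `_bdev_nvme_reset_ctrlr_complete` logs "Resetting controller failed." before the next attempt starts roughly 14 ms later. A minimal sketch of that kind of bounded reconnect loop follows — hypothetical names and structure, not SPDK's actual API:

```python
import time

def reset_ctrlr(connect, max_attempts=5, delay_s=0.01):
    """Hypothetical sketch of the reset/reconnect cycle the log repeats:
    try to reconnect, and give up once connect() has been refused
    max_attempts times in a row.

    connect: callable returning (ok: bool, err: int), standing in for the
    transport-level socket connect whose failures appear in the log.
    """
    for attempt in range(1, max_attempts + 1):
        ok, err = connect()
        if ok:
            return attempt  # reconnected on this attempt
        # mirrors "controller reinitialization failed" -> pause, then retry
        time.sleep(delay_s)
    raise ConnectionError("Resetting controller failed.")
```

With a `connect` callable that keeps failing, this raises after `max_attempts`, matching the repeated "Resetting controller failed." records; in the log the target at 10.0.0.2:4420 is refusing connections, so every cycle takes the failure path.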
00:25:54.739 15:36:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:54.739 15:36:12 -- common/autotest_common.sh@850 -- # return 0 00:25:54.739 15:36:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:54.739 15:36:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:54.739 15:36:12 -- common/autotest_common.sh@10 -- # set +x 00:25:54.739 [2024-04-26 15:36:12.060942] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.739 [2024-04-26 15:36:12.061507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-04-26 15:36:12.062056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-04-26 15:36:12.062092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.739 [2024-04-26 15:36:12.062104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.739 [2024-04-26 15:36:12.062340] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.739 [2024-04-26 15:36:12.062562] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.739 [2024-04-26 15:36:12.062571] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.739 [2024-04-26 15:36:12.062578] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.739 [2024-04-26 15:36:12.066114] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.739 [2024-04-26 15:36:12.074854] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.739 [2024-04-26 15:36:12.075523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-04-26 15:36:12.076042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-04-26 15:36:12.076079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.739 [2024-04-26 15:36:12.076089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.739 [2024-04-26 15:36:12.076326] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.739 [2024-04-26 15:36:12.076549] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.739 [2024-04-26 15:36:12.076557] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.739 [2024-04-26 15:36:12.076565] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.739 [2024-04-26 15:36:12.080096] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.739 [2024-04-26 15:36:12.088816] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.739 [2024-04-26 15:36:12.089525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-04-26 15:36:12.090072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-04-26 15:36:12.090109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.739 [2024-04-26 15:36:12.090119] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.739 [2024-04-26 15:36:12.090356] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.739 [2024-04-26 15:36:12.090578] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.739 [2024-04-26 15:36:12.090587] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.739 [2024-04-26 15:36:12.090595] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.739 [2024-04-26 15:36:12.094127] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.739 15:36:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.739 15:36:12 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:54.739 15:36:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:54.739 15:36:12 -- common/autotest_common.sh@10 -- # set +x 00:25:54.739 [2024-04-26 15:36:12.102640] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.739 [2024-04-26 15:36:12.103187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-04-26 15:36:12.103401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-04-26 15:36:12.103410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.739 [2024-04-26 15:36:12.103418] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.739 [2024-04-26 15:36:12.103636] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.739 [2024-04-26 15:36:12.103858] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.739 [2024-04-26 15:36:12.103866] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.739 [2024-04-26 15:36:12.103878] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.739 [2024-04-26 15:36:12.104447] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.739 [2024-04-26 15:36:12.107395] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.739 15:36:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:54.739 15:36:12 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:54.739 15:36:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:54.739 15:36:12 -- common/autotest_common.sh@10 -- # set +x 00:25:54.739 [2024-04-26 15:36:12.116533] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.739 [2024-04-26 15:36:12.117068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-04-26 15:36:12.117420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-04-26 15:36:12.117430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.739 [2024-04-26 15:36:12.117437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.739 [2024-04-26 15:36:12.117655] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.739 [2024-04-26 15:36:12.117876] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.739 [2024-04-26 15:36:12.117884] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.739 [2024-04-26 15:36:12.117891] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.739 [2024-04-26 15:36:12.121414] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.739 [2024-04-26 15:36:12.130339] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:54.739 [2024-04-26 15:36:12.131069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-04-26 15:36:12.131336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.739 [2024-04-26 15:36:12.131349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420 00:25:54.739 [2024-04-26 15:36:12.131359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set 00:25:54.739 [2024-04-26 15:36:12.131597] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor 00:25:54.739 [2024-04-26 15:36:12.131817] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:54.740 [2024-04-26 15:36:12.131825] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:54.740 [2024-04-26 15:36:12.131833] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:54.740 Malloc0 00:25:54.740 [2024-04-26 15:36:12.135365] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:54.740 15:36:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:54.740 15:36:12 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:54.740 15:36:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:54.740 15:36:12 -- common/autotest_common.sh@10 -- # set +x
00:25:54.740 [2024-04-26 15:36:12.144294] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.740 [2024-04-26 15:36:12.144959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-04-26 15:36:12.145322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-04-26 15:36:12.145335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.740 [2024-04-26 15:36:12.145349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.740 [2024-04-26 15:36:12.145587] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.740 [2024-04-26 15:36:12.145807] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.740 [2024-04-26 15:36:12.145815] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.740 [2024-04-26 15:36:12.145822] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.740 15:36:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:54.740 15:36:12 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:54.740 15:36:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:54.740 15:36:12 -- common/autotest_common.sh@10 -- # set +x
00:25:54.740 [2024-04-26 15:36:12.149353] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.740 [2024-04-26 15:36:12.158076] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.740 [2024-04-26 15:36:12.158622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-04-26 15:36:12.158978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:54.740 [2024-04-26 15:36:12.158994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102e4a0 with addr=10.0.0.2, port=4420
00:25:54.740 [2024-04-26 15:36:12.159003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102e4a0 is same with the state(5) to be set
00:25:54.740 [2024-04-26 15:36:12.159240] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102e4a0 (9): Bad file descriptor
00:25:54.740 [2024-04-26 15:36:12.159462] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:54.740 [2024-04-26 15:36:12.159470] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:54.740 [2024-04-26 15:36:12.159478] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:54.740 15:36:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:54.740 15:36:12 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:54.740 15:36:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:54.740 15:36:12 -- common/autotest_common.sh@10 -- # set +x
00:25:54.740 [2024-04-26 15:36:12.163003] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:54.740 [2024-04-26 15:36:12.166750] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:54.740 15:36:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:54.740 [2024-04-26 15:36:12.171932] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:54.740 15:36:12 -- host/bdevperf.sh@38 -- # wait 1781680
00:25:55.000 [2024-04-26 15:36:12.206292] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:26:04.995
00:26:04.995 Latency(us)
00:26:04.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:04.995 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:04.995 Verification LBA range: start 0x0 length 0x4000
00:26:04.996 Nvme1n1 : 15.01 8148.86 31.83 9532.57 0.00 7213.89 539.31 16711.68
00:26:04.996 ===================================================================================================================
00:26:04.996 Total : 8148.86 31.83 9532.57 0.00 7213.89 539.31 16711.68
00:26:04.996 15:36:20 -- host/bdevperf.sh@39 -- # sync
00:26:04.996 15:36:20 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:04.996 15:36:20 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:04.996 15:36:20 -- common/autotest_common.sh@10 -- # set +x
00:26:04.996 15:36:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:04.996 15:36:20 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:26:04.996 15:36:20 -- host/bdevperf.sh@44 -- # nvmftestfini
00:26:04.996 15:36:20 -- nvmf/common.sh@477 -- # nvmfcleanup
00:26:04.996 15:36:20 -- nvmf/common.sh@117 -- # sync
00:26:04.996 15:36:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:04.996 15:36:20 -- nvmf/common.sh@120 -- # set +e
00:26:04.996 15:36:20 -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:04.996 15:36:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:04.996 rmmod nvme_tcp
00:26:04.996 rmmod nvme_fabrics
00:26:04.996 rmmod nvme_keyring
00:26:04.996 15:36:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:04.996 15:36:20 -- nvmf/common.sh@124 -- # set -e
00:26:04.996 15:36:20 -- nvmf/common.sh@125 -- # return 0
00:26:04.996 15:36:20 -- nvmf/common.sh@478 -- # '[' -n 1783317 ']'
00:26:04.996 15:36:20 -- nvmf/common.sh@479 -- # killprocess 1783317
00:26:04.996 15:36:20 -- common/autotest_common.sh@936 -- # '[' -z 1783317 ']'
00:26:04.996 15:36:20 -- common/autotest_common.sh@940 -- # kill -0 1783317
00:26:04.996 15:36:20 -- common/autotest_common.sh@941 -- # uname
00:26:04.996 15:36:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:26:04.996 15:36:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1783317
00:26:04.996 15:36:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:26:04.996 15:36:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:26:04.996 15:36:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1783317'
00:26:04.996 killing process with pid 1783317
00:26:04.996 15:36:20 -- common/autotest_common.sh@955 -- # kill 1783317
00:26:04.996 15:36:20 -- common/autotest_common.sh@960 -- # wait 1783317
00:26:04.996 15:36:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:26:04.996 15:36:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:26:04.996 15:36:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:26:04.996 15:36:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:04.996 15:36:21 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:04.996 15:36:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:04.996 15:36:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:04.996 15:36:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:05.936 15:36:23 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:05.936
00:26:05.936 real 0m28.050s
00:26:05.936 user 1m2.654s
00:26:05.936 sys 0m7.588s
00:26:05.936 15:36:23 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:26:05.936 15:36:23 -- common/autotest_common.sh@10 -- # set +x
00:26:05.936 ************************************
00:26:05.936 END TEST nvmf_bdevperf
00:26:05.936 ************************************
00:26:05.936 15:36:23 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:26:05.936 15:36:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:26:05.936 15:36:23 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:05.936 15:36:23 -- common/autotest_common.sh@10 -- # set +x
00:26:05.936 ************************************
00:26:05.936 START TEST nvmf_target_disconnect
00:26:05.936 ************************************
00:26:05.936 15:36:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:26:06.197 * Looking for test storage...
00:26:06.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:06.197 15:36:23 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:06.197 15:36:23 -- nvmf/common.sh@7 -- # uname -s
00:26:06.197 15:36:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:06.197 15:36:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:06.197 15:36:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:06.197 15:36:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:06.197 15:36:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:06.197 15:36:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:06.197 15:36:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:06.197 15:36:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:06.197 15:36:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:06.197 15:36:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:06.197 15:36:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:26:06.197 15:36:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:26:06.197 15:36:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:06.197 15:36:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:06.197 15:36:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:06.197 15:36:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:06.197 15:36:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:06.197 15:36:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:06.197 15:36:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:06.197 15:36:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:06.197 15:36:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:06.197 15:36:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:06.197 15:36:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:06.197 15:36:23 -- paths/export.sh@5 -- # export PATH
00:26:06.197 15:36:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:06.197 15:36:23 -- nvmf/common.sh@47 -- # : 0
00:26:06.197 15:36:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:26:06.197 15:36:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:26:06.197 15:36:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:06.197 15:36:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:06.197 15:36:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:06.197 15:36:23 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:26:06.197 15:36:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:26:06.197 15:36:23 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:26:06.197 15:36:23 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:26:06.197 15:36:23 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:26:06.197 15:36:23 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:26:06.197 15:36:23 -- host/target_disconnect.sh@77 -- # nvmftestinit
00:26:06.197 15:36:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:26:06.197 15:36:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:06.197 15:36:23 -- nvmf/common.sh@437 -- # prepare_net_devs
00:26:06.197 15:36:23 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:26:06.197 15:36:23 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:26:06.197 15:36:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:06.197 15:36:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:06.197 15:36:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:06.197 15:36:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:26:06.197 15:36:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:26:06.197 15:36:23 -- nvmf/common.sh@285 -- # xtrace_disable
00:26:06.197 15:36:23 -- common/autotest_common.sh@10 -- # set +x
00:26:14.334 15:36:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:26:14.334 15:36:30 -- nvmf/common.sh@291 -- # pci_devs=()
00:26:14.334 15:36:30 -- nvmf/common.sh@291 -- # local -a pci_devs
00:26:14.334 15:36:30 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:26:14.334 15:36:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:26:14.334 15:36:30 -- nvmf/common.sh@293 -- # pci_drivers=()
00:26:14.334 15:36:30 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:26:14.334 15:36:30 -- nvmf/common.sh@295 -- # net_devs=()
00:26:14.334 15:36:30 -- nvmf/common.sh@295 -- # local -ga net_devs
00:26:14.334 15:36:30 -- nvmf/common.sh@296 -- # e810=()
00:26:14.334 15:36:30 -- nvmf/common.sh@296 -- # local -ga e810
00:26:14.334 15:36:30 -- nvmf/common.sh@297 -- # x722=()
00:26:14.334 15:36:30 -- nvmf/common.sh@297 -- # local -ga x722
00:26:14.334 15:36:30 -- nvmf/common.sh@298 -- # mlx=()
00:26:14.334 15:36:30 -- nvmf/common.sh@298 -- # local -ga mlx
00:26:14.334 15:36:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:14.334 15:36:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:14.334 15:36:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:26:14.334 15:36:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:26:14.334 15:36:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:26:14.334 15:36:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:26:14.334 15:36:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:26:14.334 15:36:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:26:14.334 15:36:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:26:14.334 15:36:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:26:14.334 15:36:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:26:14.334 15:36:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:26:14.334 15:36:30 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:26:14.334 15:36:30 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:26:14.334 15:36:30 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:26:14.334 15:36:30 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:26:14.334 15:36:30 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:26:14.334 15:36:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:26:14.334 15:36:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:26:14.334 Found 0000:31:00.0 (0x8086 - 0x159b)
00:26:14.334 15:36:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:26:14.334 15:36:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:26:14.334 15:36:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:14.334 15:36:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
15:36:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:26:14.335 15:36:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:26:14.335 15:36:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:26:14.335 Found 0000:31:00.1 (0x8086 - 0x159b)
00:26:14.335 15:36:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:26:14.335 15:36:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:26:14.335 15:36:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:14.335 15:36:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:14.335 15:36:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:26:14.335 15:36:30 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:26:14.335 15:36:30 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:26:14.335 15:36:30 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:26:14.335 15:36:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:26:14.335 15:36:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:14.335 15:36:30 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:26:14.335 15:36:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:14.335 15:36:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:26:14.335 Found net devices under 0000:31:00.0: cvl_0_0
00:26:14.335 15:36:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:26:14.335 15:36:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:26:14.335 15:36:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:14.335 15:36:30 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:26:14.335 15:36:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:14.335 15:36:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:26:14.335 Found net devices under 0000:31:00.1: cvl_0_1
00:26:14.335 15:36:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:26:14.335 15:36:30 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:26:14.335 15:36:30 -- nvmf/common.sh@403 -- # is_hw=yes
00:26:14.335 15:36:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:26:14.335 15:36:30 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:26:14.335 15:36:30 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:26:14.335 15:36:30 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:14.335 15:36:30 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:14.335 15:36:30 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:14.335 15:36:30 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:26:14.335 15:36:30 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:14.335 15:36:30 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:14.335 15:36:30 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:26:14.335 15:36:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:14.335 15:36:30 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:14.335 15:36:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:26:14.335 15:36:30 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:26:14.335 15:36:30 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:26:14.335 15:36:30 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:14.335 15:36:30 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:14.335 15:36:30 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:14.335 15:36:30 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:26:14.335 15:36:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:14.335 15:36:30 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:14.335 15:36:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:14.335 15:36:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:26:14.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:14.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.865 ms
00:26:14.335
00:26:14.335 --- 10.0.0.2 ping statistics ---
00:26:14.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:14.335 rtt min/avg/max/mdev = 0.865/0.865/0.865/0.000 ms
00:26:14.335 15:36:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:14.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:14.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms
00:26:14.335
00:26:14.335 --- 10.0.0.1 ping statistics ---
00:26:14.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:14.335 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms
00:26:14.335 15:36:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:14.335 15:36:30 -- nvmf/common.sh@411 -- # return 0
00:26:14.335 15:36:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:26:14.335 15:36:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:14.335 15:36:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:26:14.335 15:36:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:26:14.335 15:36:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:14.335 15:36:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:26:14.335 15:36:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:26:14.335 15:36:30 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:26:14.335 15:36:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:14.335 15:36:30 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:14.335 15:36:30 -- common/autotest_common.sh@10 -- # set +x
00:26:14.335 ************************************
00:26:14.335 START TEST nvmf_target_disconnect_tc1 ************************************
00:26:14.335 15:36:30 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1
00:26:14.335 15:36:30 -- host/target_disconnect.sh@32 -- # set +e
00:26:14.335 15:36:30 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:14.335 EAL: No free 2048 kB hugepages reported on node 1
00:26:14.335 [2024-04-26 15:36:30.914450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.335 [2024-04-26 15:36:30.914735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.335 [2024-04-26 15:36:30.914749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1375510 with addr=10.0.0.2, port=4420
00:26:14.335 [2024-04-26 15:36:30.914772] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:26:14.335 [2024-04-26 15:36:30.914785] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:26:14.335 [2024-04-26 15:36:30.914792] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed
00:26:14.335 spdk_nvme_probe() failed for transport address '10.0.0.2'
00:26:14.335 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:26:14.335 Initializing NVMe Controllers
00:26:14.335 15:36:30 -- host/target_disconnect.sh@33 -- # trap - ERR
00:26:14.335 15:36:30 -- host/target_disconnect.sh@33 -- # print_backtrace
00:26:14.335 15:36:30 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]]
00:26:14.335 15:36:30 -- common/autotest_common.sh@1139 -- # return 0
00:26:14.335 15:36:30 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']'
00:26:14.335 15:36:30 -- host/target_disconnect.sh@41 -- # set -e
00:26:14.335
00:26:14.335 real 0m0.104s
00:26:14.335 user 0m0.037s
00:26:14.335 sys 0m0.067s
00:26:14.335 15:36:30 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:26:14.335 15:36:30 -- common/autotest_common.sh@10 -- # set +x
00:26:14.335 ************************************
00:26:14.335 END TEST nvmf_target_disconnect_tc1 ************************************
00:26:14.335 15:36:30 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:26:14.336 15:36:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:14.336 15:36:30 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:14.336 15:36:30 -- common/autotest_common.sh@10 -- # set +x
00:26:14.336 ************************************
00:26:14.336 START TEST nvmf_target_disconnect_tc2 ************************************
00:26:14.336 15:36:31 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2
00:26:14.336 15:36:31 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2
00:26:14.336 15:36:31 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:14.336 15:36:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:26:14.336 15:36:31 -- common/autotest_common.sh@710 -- # xtrace_disable
00:26:14.336 15:36:31 -- common/autotest_common.sh@10 -- # set +x
00:26:14.336 15:36:31 -- nvmf/common.sh@470 -- # nvmfpid=1789605
00:26:14.336 15:36:31 -- nvmf/common.sh@471 -- # waitforlisten 1789605
00:26:14.336 15:36:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:14.336 15:36:31 -- common/autotest_common.sh@817 -- # '[' -z 1789605 ']'
00:26:14.336 15:36:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:14.336 15:36:31 -- common/autotest_common.sh@822 -- # local max_retries=100
00:26:14.336 15:36:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:14.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:14.336 15:36:31 -- common/autotest_common.sh@826 -- # xtrace_disable
00:26:14.336 15:36:31 -- common/autotest_common.sh@10 -- # set +x
00:26:14.336 [2024-04-26 15:36:31.191090] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:26:14.336 [2024-04-26 15:36:31.191137] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:14.336 EAL: No free 2048 kB hugepages reported on node 1
00:26:14.336 [2024-04-26 15:36:31.277535] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:14.336 [2024-04-26 15:36:31.370371] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:14.336 [2024-04-26 15:36:31.370433] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:14.336 [2024-04-26 15:36:31.370441] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:14.336 [2024-04-26 15:36:31.370448] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:14.336 [2024-04-26 15:36:31.370458] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:14.336 [2024-04-26 15:36:31.370560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:26:14.336 [2024-04-26 15:36:31.370708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:26:14.336 [2024-04-26 15:36:31.371207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:26:14.336 [2024-04-26 15:36:31.371289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:26:14.596 15:36:31 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:26:14.596 15:36:31 -- common/autotest_common.sh@850 -- # return 0
00:26:14.596 15:36:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:26:14.596 15:36:31 -- common/autotest_common.sh@716 -- # xtrace_disable
00:26:14.596 15:36:31 -- common/autotest_common.sh@10 -- # set +x
00:26:14.596 15:36:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:14.596 15:36:32 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:14.596 15:36:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:14.596 15:36:32 -- common/autotest_common.sh@10 -- # set +x
00:26:14.866 Malloc0
00:26:14.866 15:36:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:14.866 15:36:32 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:26:14.866 15:36:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:14.866 15:36:32 -- common/autotest_common.sh@10 -- # set +x
00:26:14.866 [2024-04-26 15:36:32.057846] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:14.866 15:36:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:14.866 15:36:32 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:14.866 15:36:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:14.866 15:36:32 -- common/autotest_common.sh@10 -- # set +x
00:26:14.866 15:36:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:14.866 15:36:32 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:14.866 15:36:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:14.866 15:36:32 -- common/autotest_common.sh@10 -- # set +x
00:26:14.866 15:36:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:14.866 15:36:32 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:14.866 15:36:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:14.866 15:36:32 -- common/autotest_common.sh@10 -- # set +x
00:26:14.866 [2024-04-26 15:36:32.098243] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:14.867 15:36:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:14.867 15:36:32 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:14.867 15:36:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:14.867 15:36:32 -- common/autotest_common.sh@10 -- # set +x
00:26:14.867 15:36:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:14.867 15:36:32 -- host/target_disconnect.sh@50 -- # reconnectpid=1789642
00:26:14.867 15:36:32 -- host/target_disconnect.sh@52 -- # sleep 2
00:26:14.867 15:36:32 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:14.867 EAL: No free 2048 kB hugepages reported on node 1
00:26:16.782 15:36:34 -- host/target_disconnect.sh@53 -- # kill -9 1789605
00:26:16.782 15:36:34 -- host/target_disconnect.sh@55 -- # sleep 2
00:26:16.782 Read completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Read completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Read completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Read completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Read completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Read completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Read completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Read completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Read completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Write completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Write completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Read completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Read completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Write completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Write completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Write completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Write completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Read completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Write completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Read completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Write completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Read completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Read completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Read completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Read completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Read completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Write completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Write completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Read completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Read completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Write completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Write completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Read completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 Read completed with error (sct=0, sc=8)
00:26:16.782 starting I/O failed
00:26:16.782 [2024-04-26 15:36:34.130858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:16.782 [2024-04-26 15:36:34.131405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.782 [2024-04-26 15:36:34.131760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.782 [2024-04-26 15:36:34.131773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.782 qpair failed and we were unable to recover it.
00:26:16.782 [2024-04-26 15:36:34.132104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.782 [2024-04-26 15:36:34.132484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.782 [2024-04-26 15:36:34.132498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.782 qpair failed and we were unable to recover it.
00:26:16.782 [2024-04-26 15:36:34.132849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.782 [2024-04-26 15:36:34.133259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.782 [2024-04-26 15:36:34.133293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.782 qpair failed and we were unable to recover it.
00:26:16.782 [2024-04-26 15:36:34.133604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.782 [2024-04-26 15:36:34.133865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.782 [2024-04-26 15:36:34.133883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.782 qpair failed and we were unable to recover it.
00:26:16.782 [2024-04-26 15:36:34.134240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.782 [2024-04-26 15:36:34.134569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.782 [2024-04-26 15:36:34.134579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.782 qpair failed and we were unable to recover it.
00:26:16.782 [2024-04-26 15:36:34.134933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.782 [2024-04-26 15:36:34.135294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.782 [2024-04-26 15:36:34.135303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.782 qpair failed and we were unable to recover it.
00:26:16.782 [2024-04-26 15:36:34.135642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.782 [2024-04-26 15:36:34.135846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.782 [2024-04-26 15:36:34.135863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.782 qpair failed and we were unable to recover it.
00:26:16.782 [2024-04-26 15:36:34.136248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.782 [2024-04-26 15:36:34.136518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.782 [2024-04-26 15:36:34.136528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.782 qpair failed and we were unable to recover it.
00:26:16.782 [2024-04-26 15:36:34.136847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.782 [2024-04-26 15:36:34.137067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.782 [2024-04-26 15:36:34.137076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.137406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.137713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.137722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.137906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.138264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.138274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.138614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.138961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.138971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.139289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.139477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.139487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.139824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.140186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.140196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.140491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.140852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.140862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.141101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.141277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.141287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.141617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.141943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.141954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.142270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.142607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.142617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.142971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.143313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.143324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.143624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.143965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.143975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.144311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.144629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.144638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.144844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.145161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.145171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.145496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.145688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.145698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.146031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.146263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.146273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.146576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.146876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.146886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.147232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.147430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.147440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.147760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.148046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.148056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.148453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.148764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.148774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.149120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.149380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.149390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.149729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.149933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.149944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.150211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.150482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.150491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.150687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.151010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.151020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.151266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.151591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.151600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.151903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.152245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.152254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.152446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.152604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.152613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.152909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.153223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.153233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.783 [2024-04-26 15:36:34.153581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.153929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.783 [2024-04-26 15:36:34.153938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.783 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.154292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.154616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.154625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.155008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.155324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.155334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.155637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.155963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.155972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.156147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.156411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.156421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.156708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.156954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.156963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.157291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.157603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.157611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.157783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.158153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.158163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.158470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.158793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.158802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.159164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.159451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.159460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.159793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.160088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.160100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.160447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.160636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.160648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.160876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.161195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.161207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.161522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.161866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.161878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.162142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.162358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.162369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.162682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.162879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.162892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.163083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.163382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.163394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.163696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.164018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.164030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.164390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.164731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.164743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.165075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.165399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.165410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.165741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.166076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.166088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.166396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.166727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.166739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.167077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.167403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.167420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.167626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.167941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.167953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.168263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.168569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.168580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.168892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.169209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.169220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.169416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.169765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.169776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.170203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.170512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.170523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.784 qpair failed and we were unable to recover it.
00:26:16.784 [2024-04-26 15:36:34.170833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.784 [2024-04-26 15:36:34.171193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.785 [2024-04-26 15:36:34.171205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.785 qpair failed and we were unable to recover it.
00:26:16.785 [2024-04-26 15:36:34.171508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.785 [2024-04-26 15:36:34.171700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.785 [2024-04-26 15:36:34.171711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.785 qpair failed and we were unable to recover it.
00:26:16.785 [2024-04-26 15:36:34.171900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.785 [2024-04-26 15:36:34.172194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.785 [2024-04-26 15:36:34.172205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.785 qpair failed and we were unable to recover it.
00:26:16.785 [2024-04-26 15:36:34.172523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.785 [2024-04-26 15:36:34.172825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.785 [2024-04-26 15:36:34.172841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.785 qpair failed and we were unable to recover it.
00:26:16.785 [2024-04-26 15:36:34.173221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.785 [2024-04-26 15:36:34.173610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.785 [2024-04-26 15:36:34.173624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.785 qpair failed and we were unable to recover it.
00:26:16.785 [2024-04-26 15:36:34.173879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.785 [2024-04-26 15:36:34.174204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.785 [2024-04-26 15:36:34.174217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.785 qpair failed and we were unable to recover it.
00:26:16.785 [2024-04-26 15:36:34.174567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.785 [2024-04-26 15:36:34.174881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.785 [2024-04-26 15:36:34.174895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.785 qpair failed and we were unable to recover it.
00:26:16.785 [2024-04-26 15:36:34.175235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.785 [2024-04-26 15:36:34.175567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.785 [2024-04-26 15:36:34.175580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.785 qpair failed and we were unable to recover it.
00:26:16.785 [2024-04-26 15:36:34.175800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.785 [2024-04-26 15:36:34.176110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.785 [2024-04-26 15:36:34.176124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.785 qpair failed and we were unable to recover it.
00:26:16.785 [2024-04-26 15:36:34.176360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.785 [2024-04-26 15:36:34.176730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.785 [2024-04-26 15:36:34.176743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.785 qpair failed and we were unable to recover it.
00:26:16.785 [2024-04-26 15:36:34.177075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.177427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.177441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.785 qpair failed and we were unable to recover it. 00:26:16.785 [2024-04-26 15:36:34.177790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.178161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.178175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.785 qpair failed and we were unable to recover it. 00:26:16.785 [2024-04-26 15:36:34.178376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.178668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.178682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.785 qpair failed and we were unable to recover it. 00:26:16.785 [2024-04-26 15:36:34.179036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.179359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.179372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.785 qpair failed and we were unable to recover it. 
00:26:16.785 [2024-04-26 15:36:34.179679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.180082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.180096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.785 qpair failed and we were unable to recover it. 00:26:16.785 [2024-04-26 15:36:34.180409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.180759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.180773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.785 qpair failed and we were unable to recover it. 00:26:16.785 [2024-04-26 15:36:34.181087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.181436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.181450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.785 qpair failed and we were unable to recover it. 00:26:16.785 [2024-04-26 15:36:34.181667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.182015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.182029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.785 qpair failed and we were unable to recover it. 
00:26:16.785 [2024-04-26 15:36:34.182370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.182683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.182696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.785 qpair failed and we were unable to recover it. 00:26:16.785 [2024-04-26 15:36:34.183018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.183374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.183387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.785 qpair failed and we were unable to recover it. 00:26:16.785 [2024-04-26 15:36:34.183690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.184021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.184035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.785 qpair failed and we were unable to recover it. 00:26:16.785 [2024-04-26 15:36:34.184343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.184659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.184672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.785 qpair failed and we were unable to recover it. 
00:26:16.785 [2024-04-26 15:36:34.184980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.185343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.185357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.785 qpair failed and we were unable to recover it. 00:26:16.785 [2024-04-26 15:36:34.185657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.185984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.185998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.785 qpair failed and we were unable to recover it. 00:26:16.785 [2024-04-26 15:36:34.186337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.186679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.186693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.785 qpair failed and we were unable to recover it. 00:26:16.785 [2024-04-26 15:36:34.187004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.187324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.187338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.785 qpair failed and we were unable to recover it. 
00:26:16.785 [2024-04-26 15:36:34.187552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.187897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.187911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.785 qpair failed and we were unable to recover it. 00:26:16.785 [2024-04-26 15:36:34.188268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.188617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.188631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.785 qpair failed and we were unable to recover it. 00:26:16.785 [2024-04-26 15:36:34.188813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.785 [2024-04-26 15:36:34.189141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.189156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 00:26:16.786 [2024-04-26 15:36:34.189446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.189791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.189805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 
00:26:16.786 [2024-04-26 15:36:34.190187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.190542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.190556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 00:26:16.786 [2024-04-26 15:36:34.190896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.191252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.191266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 00:26:16.786 [2024-04-26 15:36:34.191594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.191940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.191954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 00:26:16.786 [2024-04-26 15:36:34.192139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.192452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.192466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 
00:26:16.786 [2024-04-26 15:36:34.192783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.193107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.193122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 00:26:16.786 [2024-04-26 15:36:34.193457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.193809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.193823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 00:26:16.786 [2024-04-26 15:36:34.194164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.194513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.194527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 00:26:16.786 [2024-04-26 15:36:34.194864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.195262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.195276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 
00:26:16.786 [2024-04-26 15:36:34.195479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.195683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.195697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 00:26:16.786 [2024-04-26 15:36:34.196051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.196367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.196381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 00:26:16.786 [2024-04-26 15:36:34.196660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.196995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.197010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 00:26:16.786 [2024-04-26 15:36:34.197320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.197630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.197644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 
00:26:16.786 [2024-04-26 15:36:34.197871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.198181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.198195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 00:26:16.786 [2024-04-26 15:36:34.198544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.198971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.198986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 00:26:16.786 [2024-04-26 15:36:34.199316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.199671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.199685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 00:26:16.786 [2024-04-26 15:36:34.199987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.200337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.200351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 
00:26:16.786 [2024-04-26 15:36:34.200666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.200856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.200871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 00:26:16.786 [2024-04-26 15:36:34.201108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.201430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.201443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 00:26:16.786 [2024-04-26 15:36:34.201762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.202063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.202078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 00:26:16.786 [2024-04-26 15:36:34.202459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.202805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.202819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 
00:26:16.786 [2024-04-26 15:36:34.203129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.203443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.203458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 00:26:16.786 [2024-04-26 15:36:34.203802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.204147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.204162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 00:26:16.786 [2024-04-26 15:36:34.204499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.204793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.204807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 00:26:16.786 [2024-04-26 15:36:34.205141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.205491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.205505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 
00:26:16.786 [2024-04-26 15:36:34.205819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.206178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.206193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.786 qpair failed and we were unable to recover it. 00:26:16.786 [2024-04-26 15:36:34.206531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.786 [2024-04-26 15:36:34.206877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-26 15:36:34.206892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-26 15:36:34.207258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-26 15:36:34.207471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-26 15:36:34.207486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-26 15:36:34.207707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-26 15:36:34.208019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-26 15:36:34.208033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 
00:26:16.787 [2024-04-26 15:36:34.208383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-26 15:36:34.208721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-26 15:36:34.208734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-26 15:36:34.209022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-26 15:36:34.209327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-26 15:36:34.209340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-26 15:36:34.209656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-26 15:36:34.209985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-26 15:36:34.209999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-26 15:36:34.210365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-26 15:36:34.210717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-26 15:36:34.210731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 
00:26:16.787 [2024-04-26 15:36:34.211033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-26 15:36:34.211369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-26 15:36:34.211382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-26 15:36:34.211715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-26 15:36:34.212033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-26 15:36:34.212050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-26 15:36:34.212352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-26 15:36:34.212661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-26 15:36:34.212676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-26 15:36:34.213011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-26 15:36:34.213352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-26 15:36:34.213365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 
00:26:16.787 [2024-04-26 15:36:34.213699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.214025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.214039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.787 qpair failed and we were unable to recover it.
00:26:16.787 [2024-04-26 15:36:34.214309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.214663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.214677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.787 qpair failed and we were unable to recover it.
00:26:16.787 [2024-04-26 15:36:34.214742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.215107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.215122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.787 qpair failed and we were unable to recover it.
00:26:16.787 [2024-04-26 15:36:34.215477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.215696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.215710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.787 qpair failed and we were unable to recover it.
00:26:16.787 [2024-04-26 15:36:34.216016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.216302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.216315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.787 qpair failed and we were unable to recover it.
00:26:16.787 [2024-04-26 15:36:34.216558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.216877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.216891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.787 qpair failed and we were unable to recover it.
00:26:16.787 [2024-04-26 15:36:34.217197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.217545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.217558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.787 qpair failed and we were unable to recover it.
00:26:16.787 [2024-04-26 15:36:34.217856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.218180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.218197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.787 qpair failed and we were unable to recover it.
00:26:16.787 [2024-04-26 15:36:34.218503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.218825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.218847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.787 qpair failed and we were unable to recover it.
00:26:16.787 [2024-04-26 15:36:34.219171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.219526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.219541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.787 qpair failed and we were unable to recover it.
00:26:16.787 [2024-04-26 15:36:34.219859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.220177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.220190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.787 qpair failed and we were unable to recover it.
00:26:16.787 [2024-04-26 15:36:34.220486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.220815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.220828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.787 qpair failed and we were unable to recover it.
00:26:16.787 [2024-04-26 15:36:34.221200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.221517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.221530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.787 qpair failed and we were unable to recover it.
00:26:16.787 [2024-04-26 15:36:34.221842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.222247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.222260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.787 qpair failed and we were unable to recover it.
00:26:16.787 [2024-04-26 15:36:34.222458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.222810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.787 [2024-04-26 15:36:34.222823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.788 qpair failed and we were unable to recover it.
00:26:16.788 [2024-04-26 15:36:34.223125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.788 [2024-04-26 15:36:34.223440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.788 [2024-04-26 15:36:34.223453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.788 qpair failed and we were unable to recover it.
00:26:16.788 [2024-04-26 15:36:34.223804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.788 [2024-04-26 15:36:34.224141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.788 [2024-04-26 15:36:34.224162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.788 qpair failed and we were unable to recover it.
00:26:16.788 [2024-04-26 15:36:34.224381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.788 [2024-04-26 15:36:34.224736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.788 [2024-04-26 15:36:34.224753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:16.788 qpair failed and we were unable to recover it.
00:26:17.056 [2024-04-26 15:36:34.225895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.056 [2024-04-26 15:36:34.226129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.056 [2024-04-26 15:36:34.226146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.056 qpair failed and we were unable to recover it.
00:26:17.056 [2024-04-26 15:36:34.226555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.056 [2024-04-26 15:36:34.226782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.056 [2024-04-26 15:36:34.226803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.056 qpair failed and we were unable to recover it.
00:26:17.056 [2024-04-26 15:36:34.227133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.056 [2024-04-26 15:36:34.227453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.056 [2024-04-26 15:36:34.227466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.056 qpair failed and we were unable to recover it.
00:26:17.056 [2024-04-26 15:36:34.227798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.056 [2024-04-26 15:36:34.228152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.056 [2024-04-26 15:36:34.228167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.056 qpair failed and we were unable to recover it.
00:26:17.056 [2024-04-26 15:36:34.228562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.056 [2024-04-26 15:36:34.228913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.056 [2024-04-26 15:36:34.228928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.056 qpair failed and we were unable to recover it.
00:26:17.056 [2024-04-26 15:36:34.229271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.056 [2024-04-26 15:36:34.229581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.056 [2024-04-26 15:36:34.229595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.056 qpair failed and we were unable to recover it.
00:26:17.056 [2024-04-26 15:36:34.229942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.056 [2024-04-26 15:36:34.230266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.056 [2024-04-26 15:36:34.230280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.056 qpair failed and we were unable to recover it.
00:26:17.056 [2024-04-26 15:36:34.230590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.056 [2024-04-26 15:36:34.230910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.056 [2024-04-26 15:36:34.230924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.056 qpair failed and we were unable to recover it.
00:26:17.056 [2024-04-26 15:36:34.231249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.056 [2024-04-26 15:36:34.231572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.056 [2024-04-26 15:36:34.231586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.056 qpair failed and we were unable to recover it.
00:26:17.056 [2024-04-26 15:36:34.231891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.056 [2024-04-26 15:36:34.232209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.232227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.232523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.232878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.232892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.233233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.233549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.233563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.233929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.234275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.234289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.234629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.234858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.234874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.235214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.235573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.235586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.235895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.236234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.236248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.236582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.236904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.236918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.237242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.237566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.237579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.237927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.238286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.238300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.238606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.238955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.238969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.239288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.239605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.239619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.240012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.240206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.240220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.240561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.240772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.240785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.241127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.241482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.241496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.241800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.242173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.242187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.242524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.242872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.242888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.243139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.243459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.243473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.243785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.244122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.244136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.244446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.244734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.244747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.245060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.245370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.245384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.245730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.246076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.246090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.246303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.246643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.246657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.246999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.247316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.247330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.247668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.248000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.248014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.248408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.248717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.248730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.249070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.249408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.249422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.249766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.250101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.250116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.057 [2024-04-26 15:36:34.250433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.250737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.057 [2024-04-26 15:36:34.250750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.057 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.251094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.251443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.251457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.251798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.252152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.252166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.252509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.252857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.252872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.253205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.253523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.253536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.253755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.254087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.254102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.254452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.254786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.254800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.255116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.255482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.255496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.255809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.256128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.256142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.256491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.256825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.256843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.257273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.257622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.257636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.258087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.258477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.258496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.258861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.259176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.259191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.259409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.259801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.259815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.260200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.260513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.260526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.260879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.261075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.261090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.261418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.261777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.261791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.261994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.262234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.262248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.262473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.262824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.262842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.263223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.263572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.263587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.263938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.264234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.264248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.264586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.264904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.264919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.265289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.265491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.265506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.265844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.266137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.266151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.266463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.266786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.266800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.267109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.267308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.267323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.267638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.267933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.267947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.268264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.268618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.268632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.058 [2024-04-26 15:36:34.268937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.269148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.058 [2024-04-26 15:36:34.269161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.058 qpair failed and we were unable to recover it.
00:26:17.059 [2024-04-26 15:36:34.269509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.059 [2024-04-26 15:36:34.269855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.059 [2024-04-26 15:36:34.269871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.059 qpair failed and we were unable to recover it.
00:26:17.059 [2024-04-26 15:36:34.270159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.059 [2024-04-26 15:36:34.270472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.059 [2024-04-26 15:36:34.270486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.059 qpair failed and we were unable to recover it.
00:26:17.059 [2024-04-26 15:36:34.270827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.059 [2024-04-26 15:36:34.271158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.059 [2024-04-26 15:36:34.271172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.059 qpair failed and we were unable to recover it.
00:26:17.059 [2024-04-26 15:36:34.271521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.059 [2024-04-26 15:36:34.271746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.059 [2024-04-26 15:36:34.271759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.059 qpair failed and we were unable to recover it.
00:26:17.059 [2024-04-26 15:36:34.272095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.272303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.272318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.059 qpair failed and we were unable to recover it. 00:26:17.059 [2024-04-26 15:36:34.272653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.272971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.272986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.059 qpair failed and we were unable to recover it. 00:26:17.059 [2024-04-26 15:36:34.273343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.273690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.273703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.059 qpair failed and we were unable to recover it. 00:26:17.059 [2024-04-26 15:36:34.274003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.274192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.274207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.059 qpair failed and we were unable to recover it. 
00:26:17.059 [2024-04-26 15:36:34.274515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.274878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.274892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.059 qpair failed and we were unable to recover it. 00:26:17.059 [2024-04-26 15:36:34.275239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.275573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.275588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.059 qpair failed and we were unable to recover it. 00:26:17.059 [2024-04-26 15:36:34.275906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.276241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.276255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.059 qpair failed and we were unable to recover it. 00:26:17.059 [2024-04-26 15:36:34.276588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.276925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.276939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.059 qpair failed and we were unable to recover it. 
00:26:17.059 [2024-04-26 15:36:34.277294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.277606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.277619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.059 qpair failed and we were unable to recover it. 00:26:17.059 [2024-04-26 15:36:34.277939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.278275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.278289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.059 qpair failed and we were unable to recover it. 00:26:17.059 [2024-04-26 15:36:34.278635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.278967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.278983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.059 qpair failed and we were unable to recover it. 00:26:17.059 [2024-04-26 15:36:34.279318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.279513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.279528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.059 qpair failed and we were unable to recover it. 
00:26:17.059 [2024-04-26 15:36:34.279912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.280234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.280249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.059 qpair failed and we were unable to recover it. 00:26:17.059 [2024-04-26 15:36:34.280597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.280833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.280854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.059 qpair failed and we were unable to recover it. 00:26:17.059 [2024-04-26 15:36:34.281052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.281388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.281402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.059 qpair failed and we were unable to recover it. 00:26:17.059 [2024-04-26 15:36:34.281741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.282068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.282082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.059 qpair failed and we were unable to recover it. 
00:26:17.059 [2024-04-26 15:36:34.282367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.282663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.282677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.059 qpair failed and we were unable to recover it. 00:26:17.059 [2024-04-26 15:36:34.283030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.283352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.283366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.059 qpair failed and we were unable to recover it. 00:26:17.059 [2024-04-26 15:36:34.283670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.283992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.284006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.059 qpair failed and we were unable to recover it. 00:26:17.059 [2024-04-26 15:36:34.284306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.284650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.284664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.059 qpair failed and we were unable to recover it. 
00:26:17.059 [2024-04-26 15:36:34.284965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.285310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.285324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.059 qpair failed and we were unable to recover it. 00:26:17.059 [2024-04-26 15:36:34.285665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.285994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.286010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.059 qpair failed and we were unable to recover it. 00:26:17.059 [2024-04-26 15:36:34.286354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.286683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.286697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.059 qpair failed and we were unable to recover it. 00:26:17.059 [2024-04-26 15:36:34.287003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.287340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.059 [2024-04-26 15:36:34.287353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.059 qpair failed and we were unable to recover it. 
00:26:17.060 [2024-04-26 15:36:34.287692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.288020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.288034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 00:26:17.060 [2024-04-26 15:36:34.288385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.288699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.288714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 00:26:17.060 [2024-04-26 15:36:34.288901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.289297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.289311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 00:26:17.060 [2024-04-26 15:36:34.289657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.290006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.290020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 
00:26:17.060 [2024-04-26 15:36:34.290410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.290603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.290617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 00:26:17.060 [2024-04-26 15:36:34.290956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.291169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.291183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 00:26:17.060 [2024-04-26 15:36:34.291543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.291905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.291919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 00:26:17.060 [2024-04-26 15:36:34.292222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.292541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.292555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 
00:26:17.060 [2024-04-26 15:36:34.292872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.293189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.293203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 00:26:17.060 [2024-04-26 15:36:34.293514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.293739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.293753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 00:26:17.060 [2024-04-26 15:36:34.294100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.294409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.294423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 00:26:17.060 [2024-04-26 15:36:34.294733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.294910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.294925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 
00:26:17.060 [2024-04-26 15:36:34.295276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.295625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.295640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 00:26:17.060 [2024-04-26 15:36:34.295937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.296325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.296339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 00:26:17.060 [2024-04-26 15:36:34.296653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.296985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.296999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 00:26:17.060 [2024-04-26 15:36:34.297318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.297645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.297659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 
00:26:17.060 [2024-04-26 15:36:34.297985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.298333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.298347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 00:26:17.060 [2024-04-26 15:36:34.298659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.298990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.299004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 00:26:17.060 [2024-04-26 15:36:34.299312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.299640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.299653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 00:26:17.060 [2024-04-26 15:36:34.299960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.300267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.300281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 
00:26:17.060 [2024-04-26 15:36:34.300624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.300825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.300845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 00:26:17.060 [2024-04-26 15:36:34.301220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.301583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.301598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 00:26:17.060 [2024-04-26 15:36:34.301937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.302252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.302266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 00:26:17.060 [2024-04-26 15:36:34.302578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.302884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.302910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 
00:26:17.060 [2024-04-26 15:36:34.303279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.303600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.303614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 00:26:17.060 [2024-04-26 15:36:34.303799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.304122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.304136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 00:26:17.060 [2024-04-26 15:36:34.304487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.304841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.304856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 00:26:17.060 [2024-04-26 15:36:34.305209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.305567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.305581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.060 qpair failed and we were unable to recover it. 
00:26:17.060 [2024-04-26 15:36:34.305817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.060 [2024-04-26 15:36:34.306136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.306150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.061 qpair failed and we were unable to recover it. 00:26:17.061 [2024-04-26 15:36:34.306466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.306785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.306799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.061 qpair failed and we were unable to recover it. 00:26:17.061 [2024-04-26 15:36:34.307099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.307396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.307409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.061 qpair failed and we were unable to recover it. 00:26:17.061 [2024-04-26 15:36:34.307744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.308064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.308078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.061 qpair failed and we were unable to recover it. 
00:26:17.061 [2024-04-26 15:36:34.308435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.308629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.308644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.061 qpair failed and we were unable to recover it. 00:26:17.061 [2024-04-26 15:36:34.308980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.309403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.309416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.061 qpair failed and we were unable to recover it. 00:26:17.061 [2024-04-26 15:36:34.309715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.310065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.310079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.061 qpair failed and we were unable to recover it. 00:26:17.061 [2024-04-26 15:36:34.310326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.310658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.310672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.061 qpair failed and we were unable to recover it. 
00:26:17.061 [2024-04-26 15:36:34.310984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.311353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.311367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.061 qpair failed and we were unable to recover it. 00:26:17.061 [2024-04-26 15:36:34.311565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.311912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.311926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.061 qpair failed and we were unable to recover it. 00:26:17.061 [2024-04-26 15:36:34.312315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.312621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.312635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.061 qpair failed and we were unable to recover it. 00:26:17.061 [2024-04-26 15:36:34.312963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.313273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.313286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.061 qpair failed and we were unable to recover it. 
00:26:17.061 [2024-04-26 15:36:34.313605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.313922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.313937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.061 qpair failed and we were unable to recover it. 00:26:17.061 [2024-04-26 15:36:34.314277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.314596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.314609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.061 qpair failed and we were unable to recover it. 00:26:17.061 [2024-04-26 15:36:34.315019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.315318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.315331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.061 qpair failed and we were unable to recover it. 00:26:17.061 [2024-04-26 15:36:34.315667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.315997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.316011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.061 qpair failed and we were unable to recover it. 
00:26:17.061 [2024-04-26 15:36:34.316340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.316701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.316715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.061 qpair failed and we were unable to recover it. 00:26:17.061 [2024-04-26 15:36:34.317021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.317352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.317365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.061 qpair failed and we were unable to recover it. 00:26:17.061 [2024-04-26 15:36:34.317678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.317997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.318015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.061 qpair failed and we were unable to recover it. 00:26:17.061 [2024-04-26 15:36:34.318422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.318744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.318757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.061 qpair failed and we were unable to recover it. 
00:26:17.061 [2024-04-26 15:36:34.318976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.319344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.319357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.061 qpair failed and we were unable to recover it. 00:26:17.061 [2024-04-26 15:36:34.319574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.319907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.319921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.061 qpair failed and we were unable to recover it. 00:26:17.061 [2024-04-26 15:36:34.320298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.320631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.320645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.061 qpair failed and we were unable to recover it. 00:26:17.061 [2024-04-26 15:36:34.320987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.321389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.061 [2024-04-26 15:36:34.321402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.061 qpair failed and we were unable to recover it. 
00:26:17.061 [2024-04-26 15:36:34.321713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.322071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.322085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 00:26:17.062 [2024-04-26 15:36:34.322399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.322731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.322744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 00:26:17.062 [2024-04-26 15:36:34.323121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.323331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.323345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 00:26:17.062 [2024-04-26 15:36:34.323680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.324002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.324016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 
00:26:17.062 [2024-04-26 15:36:34.324335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.324663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.324682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 00:26:17.062 [2024-04-26 15:36:34.325026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.325357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.325371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 00:26:17.062 [2024-04-26 15:36:34.325655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.325995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.326008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 00:26:17.062 [2024-04-26 15:36:34.326337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.326657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.326671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 
00:26:17.062 [2024-04-26 15:36:34.326900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.327327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.327340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 00:26:17.062 [2024-04-26 15:36:34.327710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.327955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.327968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 00:26:17.062 [2024-04-26 15:36:34.328273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.328591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.328605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 00:26:17.062 [2024-04-26 15:36:34.328984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.329324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.329337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 
00:26:17.062 [2024-04-26 15:36:34.329678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.330027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.330041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 00:26:17.062 [2024-04-26 15:36:34.330386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.330746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.330759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 00:26:17.062 [2024-04-26 15:36:34.330992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.331325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.331341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 00:26:17.062 [2024-04-26 15:36:34.331680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.332018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.332033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 
00:26:17.062 [2024-04-26 15:36:34.332333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.332690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.332703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 00:26:17.062 [2024-04-26 15:36:34.333003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.333364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.333377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 00:26:17.062 [2024-04-26 15:36:34.333719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.333923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.333938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 00:26:17.062 [2024-04-26 15:36:34.334267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.334630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.334644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 
00:26:17.062 [2024-04-26 15:36:34.334964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.335360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.335374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 00:26:17.062 [2024-04-26 15:36:34.335684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.336011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.336025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 00:26:17.062 [2024-04-26 15:36:34.336272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.336604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.336617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 00:26:17.062 [2024-04-26 15:36:34.336917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.337280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.337293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 
00:26:17.062 [2024-04-26 15:36:34.337699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.338034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.338052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 00:26:17.062 [2024-04-26 15:36:34.338364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.338680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.338693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 00:26:17.062 [2024-04-26 15:36:34.338927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.339250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.339264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 00:26:17.062 [2024-04-26 15:36:34.339584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.339938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.339953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.062 qpair failed and we were unable to recover it. 
00:26:17.062 [2024-04-26 15:36:34.340334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.062 [2024-04-26 15:36:34.340699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.340713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 00:26:17.063 [2024-04-26 15:36:34.341022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.341347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.341360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 00:26:17.063 [2024-04-26 15:36:34.341670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.342003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.342018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 00:26:17.063 [2024-04-26 15:36:34.342334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.342646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.342659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 
00:26:17.063 [2024-04-26 15:36:34.343041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.343351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.343365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 00:26:17.063 [2024-04-26 15:36:34.343701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.344056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.344070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 00:26:17.063 [2024-04-26 15:36:34.344386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.344755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.344769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 00:26:17.063 [2024-04-26 15:36:34.345113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.345473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.345487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 
00:26:17.063 [2024-04-26 15:36:34.345825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.346167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.346181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 00:26:17.063 [2024-04-26 15:36:34.346568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.346921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.346935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 00:26:17.063 [2024-04-26 15:36:34.347158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.347483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.347497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 00:26:17.063 [2024-04-26 15:36:34.347842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.348224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.348238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 
00:26:17.063 [2024-04-26 15:36:34.348594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.348924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.348939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 00:26:17.063 [2024-04-26 15:36:34.349281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.349644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.349658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 00:26:17.063 [2024-04-26 15:36:34.349882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.350273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.350287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 00:26:17.063 [2024-04-26 15:36:34.350670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.350998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.351013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 
00:26:17.063 [2024-04-26 15:36:34.351301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.351612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.351625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 00:26:17.063 [2024-04-26 15:36:34.351868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.352200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.352213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 00:26:17.063 [2024-04-26 15:36:34.352461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.352792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.352805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 00:26:17.063 [2024-04-26 15:36:34.352993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.353315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.353328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 
00:26:17.063 [2024-04-26 15:36:34.353626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.353969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.353983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 00:26:17.063 [2024-04-26 15:36:34.354298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.354615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.354629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 00:26:17.063 [2024-04-26 15:36:34.354962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.355288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.355301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 00:26:17.063 [2024-04-26 15:36:34.355607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.355931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.355945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 
00:26:17.063 [2024-04-26 15:36:34.356303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.356655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.356669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 00:26:17.063 [2024-04-26 15:36:34.356852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.357207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.357221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 00:26:17.063 [2024-04-26 15:36:34.357533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.357861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.357875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 00:26:17.063 [2024-04-26 15:36:34.358203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.358550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.063 [2024-04-26 15:36:34.358564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.063 qpair failed and we were unable to recover it. 
00:26:17.064 [2024-04-26 15:36:34.358771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.064 [2024-04-26 15:36:34.359130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.064 [2024-04-26 15:36:34.359144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.064 qpair failed and we were unable to recover it. 00:26:17.064 [2024-04-26 15:36:34.359485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.064 [2024-04-26 15:36:34.359830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.064 [2024-04-26 15:36:34.359849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.064 qpair failed and we were unable to recover it. 00:26:17.064 [2024-04-26 15:36:34.360202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.064 [2024-04-26 15:36:34.360567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.064 [2024-04-26 15:36:34.360580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.064 qpair failed and we were unable to recover it. 00:26:17.064 [2024-04-26 15:36:34.360781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.064 [2024-04-26 15:36:34.361104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.064 [2024-04-26 15:36:34.361118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.064 qpair failed and we were unable to recover it. 
00:26:17.064 [2024-04-26 15:36:34.361442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.361794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.361808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.064 qpair failed and we were unable to recover it.
00:26:17.064 [2024-04-26 15:36:34.362153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.362488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.362502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.064 qpair failed and we were unable to recover it.
00:26:17.064 [2024-04-26 15:36:34.362847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.363148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.363162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.064 qpair failed and we were unable to recover it.
00:26:17.064 [2024-04-26 15:36:34.363349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.363665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.363680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.064 qpair failed and we were unable to recover it.
00:26:17.064 [2024-04-26 15:36:34.363997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.364331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.364346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.064 qpair failed and we were unable to recover it.
00:26:17.064 [2024-04-26 15:36:34.364694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.364964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.364979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.064 qpair failed and we were unable to recover it.
00:26:17.064 [2024-04-26 15:36:34.365278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.365610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.365624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.064 qpair failed and we were unable to recover it.
00:26:17.064 [2024-04-26 15:36:34.365934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.366269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.366282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.064 qpair failed and we were unable to recover it.
00:26:17.064 [2024-04-26 15:36:34.366618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.367007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.367021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.064 qpair failed and we were unable to recover it.
00:26:17.064 [2024-04-26 15:36:34.367221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.367560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.367574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.064 qpair failed and we were unable to recover it.
00:26:17.064 [2024-04-26 15:36:34.367889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.368234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.368248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.064 qpair failed and we were unable to recover it.
00:26:17.064 [2024-04-26 15:36:34.368552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.368895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.368909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.064 qpair failed and we were unable to recover it.
00:26:17.064 [2024-04-26 15:36:34.369239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.369567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.369580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.064 qpair failed and we were unable to recover it.
00:26:17.064 [2024-04-26 15:36:34.369884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.370110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.370123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.064 qpair failed and we were unable to recover it.
00:26:17.064 [2024-04-26 15:36:34.370438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.370819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.370832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.064 qpair failed and we were unable to recover it.
00:26:17.064 [2024-04-26 15:36:34.371213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.371508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.371522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.064 qpair failed and we were unable to recover it.
00:26:17.064 [2024-04-26 15:36:34.371876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.372199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.372213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.064 qpair failed and we were unable to recover it.
00:26:17.064 [2024-04-26 15:36:34.372521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.372845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.372860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.064 qpair failed and we were unable to recover it.
00:26:17.064 [2024-04-26 15:36:34.373190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.373601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.373614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.064 qpair failed and we were unable to recover it.
00:26:17.064 [2024-04-26 15:36:34.373926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.374253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.374267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.064 qpair failed and we were unable to recover it.
00:26:17.064 [2024-04-26 15:36:34.374506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.374721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.374736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.064 qpair failed and we were unable to recover it.
00:26:17.064 [2024-04-26 15:36:34.375035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.375386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.375399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.064 qpair failed and we were unable to recover it.
00:26:17.064 [2024-04-26 15:36:34.375749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.376105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.376119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.064 qpair failed and we were unable to recover it.
00:26:17.064 [2024-04-26 15:36:34.376429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.376763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.376777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.064 qpair failed and we were unable to recover it.
00:26:17.064 [2024-04-26 15:36:34.377132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.064 [2024-04-26 15:36:34.377485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.377499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.065 [2024-04-26 15:36:34.377814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.378161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.378175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.065 [2024-04-26 15:36:34.378520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.378835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.378855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.065 [2024-04-26 15:36:34.379108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.379436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.379450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.065 [2024-04-26 15:36:34.379878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.380232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.380247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.065 [2024-04-26 15:36:34.380563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.380924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.380939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.065 [2024-04-26 15:36:34.381145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.381449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.381464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.065 [2024-04-26 15:36:34.381801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.382127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.382141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.065 [2024-04-26 15:36:34.382563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.382801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.382815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.065 [2024-04-26 15:36:34.383161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.383477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.383491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.065 [2024-04-26 15:36:34.383809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.384137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.384151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.065 [2024-04-26 15:36:34.384513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.384870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.384885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.065 [2024-04-26 15:36:34.385193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.385442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.385456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.065 [2024-04-26 15:36:34.385683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.386023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.386037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.065 [2024-04-26 15:36:34.386396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.386747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.386760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.065 [2024-04-26 15:36:34.387090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.387385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.387399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.065 [2024-04-26 15:36:34.387634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.387982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.387997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.065 [2024-04-26 15:36:34.388351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.388715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.388729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.065 [2024-04-26 15:36:34.389103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.389434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.389448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.065 [2024-04-26 15:36:34.389770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.390098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.390113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.065 [2024-04-26 15:36:34.390428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.390740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.390753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.065 [2024-04-26 15:36:34.391087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.391433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.391448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.065 [2024-04-26 15:36:34.391768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.392126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.392141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.065 [2024-04-26 15:36:34.392441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.392780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.392794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.065 [2024-04-26 15:36:34.393108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.393462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.065 [2024-04-26 15:36:34.393476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.065 qpair failed and we were unable to recover it.
00:26:17.066 [2024-04-26 15:36:34.393797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.394123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.394138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.066 qpair failed and we were unable to recover it.
00:26:17.066 [2024-04-26 15:36:34.394479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.394708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.394722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.066 qpair failed and we were unable to recover it.
00:26:17.066 [2024-04-26 15:36:34.395067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.395377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.395391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.066 qpair failed and we were unable to recover it.
00:26:17.066 [2024-04-26 15:36:34.395723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.396079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.396094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.066 qpair failed and we were unable to recover it.
00:26:17.066 [2024-04-26 15:36:34.396410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.396766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.396780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.066 qpair failed and we were unable to recover it.
00:26:17.066 [2024-04-26 15:36:34.397156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.397506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.397520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.066 qpair failed and we were unable to recover it.
00:26:17.066 [2024-04-26 15:36:34.397879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.398234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.398249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.066 qpair failed and we were unable to recover it.
00:26:17.066 [2024-04-26 15:36:34.398595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.398909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.398923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.066 qpair failed and we were unable to recover it.
00:26:17.066 [2024-04-26 15:36:34.399161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.399543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.399556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.066 qpair failed and we were unable to recover it.
00:26:17.066 [2024-04-26 15:36:34.399798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.400163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.400177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.066 qpair failed and we were unable to recover it.
00:26:17.066 [2024-04-26 15:36:34.400523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.400853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.400867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.066 qpair failed and we were unable to recover it.
00:26:17.066 [2024-04-26 15:36:34.401261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.401612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.401626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.066 qpair failed and we were unable to recover it.
00:26:17.066 [2024-04-26 15:36:34.401943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.402301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.402315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.066 qpair failed and we were unable to recover it.
00:26:17.066 [2024-04-26 15:36:34.402538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.402857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.402872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.066 qpair failed and we were unable to recover it.
00:26:17.066 [2024-04-26 15:36:34.403212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.403528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.403542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.066 qpair failed and we were unable to recover it.
00:26:17.066 [2024-04-26 15:36:34.403880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.404228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.404242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.066 qpair failed and we were unable to recover it.
00:26:17.066 [2024-04-26 15:36:34.404572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.404766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.404781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.066 qpair failed and we were unable to recover it.
00:26:17.066 [2024-04-26 15:36:34.405117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.405480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.405494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.066 qpair failed and we were unable to recover it.
00:26:17.066 [2024-04-26 15:36:34.405676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.406018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.406032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.066 qpair failed and we were unable to recover it.
00:26:17.066 [2024-04-26 15:36:34.406372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.406688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.406702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.066 qpair failed and we were unable to recover it.
00:26:17.066 [2024-04-26 15:36:34.407024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.407308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.407322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.066 qpair failed and we were unable to recover it.
00:26:17.066 [2024-04-26 15:36:34.407487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.407830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.407851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.066 qpair failed and we were unable to recover it.
00:26:17.066 [2024-04-26 15:36:34.408118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.408439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.066 [2024-04-26 15:36:34.408452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.066 qpair failed and we were unable to recover it.
00:26:17.066 [2024-04-26 15:36:34.408778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.066 [2024-04-26 15:36:34.409134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.066 [2024-04-26 15:36:34.409148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.066 qpair failed and we were unable to recover it. 00:26:17.066 [2024-04-26 15:36:34.409344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.066 [2024-04-26 15:36:34.409700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.066 [2024-04-26 15:36:34.409714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.066 qpair failed and we were unable to recover it. 00:26:17.066 [2024-04-26 15:36:34.410025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.066 [2024-04-26 15:36:34.410236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.066 [2024-04-26 15:36:34.410251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.066 qpair failed and we were unable to recover it. 00:26:17.066 [2024-04-26 15:36:34.410582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.066 [2024-04-26 15:36:34.410912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.066 [2024-04-26 15:36:34.410927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.066 qpair failed and we were unable to recover it. 
00:26:17.066 [2024-04-26 15:36:34.411273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.066 [2024-04-26 15:36:34.411623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.066 [2024-04-26 15:36:34.411637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.066 qpair failed and we were unable to recover it. 00:26:17.066 [2024-04-26 15:36:34.411826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.066 [2024-04-26 15:36:34.412265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.412279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 00:26:17.067 [2024-04-26 15:36:34.412617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.412940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.412954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 00:26:17.067 [2024-04-26 15:36:34.413317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.413543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.413557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 
00:26:17.067 [2024-04-26 15:36:34.413900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.414176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.414190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 00:26:17.067 [2024-04-26 15:36:34.414525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.414874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.414888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 00:26:17.067 [2024-04-26 15:36:34.415186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.415539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.415553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 00:26:17.067 [2024-04-26 15:36:34.415877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.416235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.416249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 
00:26:17.067 [2024-04-26 15:36:34.416585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.416825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.416843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 00:26:17.067 [2024-04-26 15:36:34.417195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.417511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.417525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 00:26:17.067 [2024-04-26 15:36:34.417748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.418110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.418125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 00:26:17.067 [2024-04-26 15:36:34.418449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.418763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.418777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 
00:26:17.067 [2024-04-26 15:36:34.419148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.419502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.419516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 00:26:17.067 [2024-04-26 15:36:34.419857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.420224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.420238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 00:26:17.067 [2024-04-26 15:36:34.420557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.420906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.420920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 00:26:17.067 [2024-04-26 15:36:34.421121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.421423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.421437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 
00:26:17.067 [2024-04-26 15:36:34.421783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.422125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.422140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 00:26:17.067 [2024-04-26 15:36:34.422513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.422741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.422755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 00:26:17.067 [2024-04-26 15:36:34.423134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.423484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.423498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 00:26:17.067 [2024-04-26 15:36:34.423820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.424181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.424201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 
00:26:17.067 [2024-04-26 15:36:34.424537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.424854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.424869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 00:26:17.067 [2024-04-26 15:36:34.425216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.425572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.425585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 00:26:17.067 [2024-04-26 15:36:34.425900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.426198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.426212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 00:26:17.067 [2024-04-26 15:36:34.426557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.426724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.426738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 
00:26:17.067 [2024-04-26 15:36:34.427075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.427428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.427442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 00:26:17.067 [2024-04-26 15:36:34.427675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.428010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.428024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 00:26:17.067 [2024-04-26 15:36:34.428338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.428576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.428589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 00:26:17.067 [2024-04-26 15:36:34.428847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.429247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.429261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.067 qpair failed and we were unable to recover it. 
00:26:17.067 [2024-04-26 15:36:34.429594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.067 [2024-04-26 15:36:34.429870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.429884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 00:26:17.068 [2024-04-26 15:36:34.430087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.430273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.430290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 00:26:17.068 [2024-04-26 15:36:34.430628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.430929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.430942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 00:26:17.068 [2024-04-26 15:36:34.431300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.431503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.431517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 
00:26:17.068 [2024-04-26 15:36:34.431855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.432186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.432200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 00:26:17.068 [2024-04-26 15:36:34.432504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.432821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.432835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 00:26:17.068 [2024-04-26 15:36:34.433193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.433590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.433604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 00:26:17.068 [2024-04-26 15:36:34.433925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.434264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.434278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 
00:26:17.068 [2024-04-26 15:36:34.434660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.435002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.435017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 00:26:17.068 [2024-04-26 15:36:34.435241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.435471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.435485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 00:26:17.068 [2024-04-26 15:36:34.435832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.436201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.436216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 00:26:17.068 [2024-04-26 15:36:34.436546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.436789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.436806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 
00:26:17.068 [2024-04-26 15:36:34.437118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.437453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.437467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 00:26:17.068 [2024-04-26 15:36:34.437872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.438084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.438098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 00:26:17.068 [2024-04-26 15:36:34.438452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.438689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.438703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 00:26:17.068 [2024-04-26 15:36:34.438938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.439297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.439311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 
00:26:17.068 [2024-04-26 15:36:34.439631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.439828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.439852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 00:26:17.068 [2024-04-26 15:36:34.440187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.440514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.440528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 00:26:17.068 [2024-04-26 15:36:34.440860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.441201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.441214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 00:26:17.068 [2024-04-26 15:36:34.441612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.441865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.441880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 
00:26:17.068 [2024-04-26 15:36:34.442212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.442436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.442449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 00:26:17.068 [2024-04-26 15:36:34.442806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.443198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.443215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 00:26:17.068 [2024-04-26 15:36:34.443537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.443890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.443905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 00:26:17.068 [2024-04-26 15:36:34.444225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.444587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.444601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 
00:26:17.068 [2024-04-26 15:36:34.444921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.445285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.445300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 00:26:17.068 [2024-04-26 15:36:34.445619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.445850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.445866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 00:26:17.068 [2024-04-26 15:36:34.446179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.446375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.446390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 00:26:17.068 [2024-04-26 15:36:34.446617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.446848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.446863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 
00:26:17.068 [2024-04-26 15:36:34.447222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.447585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.068 [2024-04-26 15:36:34.447599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.068 qpair failed and we were unable to recover it. 00:26:17.068 [2024-04-26 15:36:34.447914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.069 [2024-04-26 15:36:34.448256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.069 [2024-04-26 15:36:34.448270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.069 qpair failed and we were unable to recover it. 00:26:17.069 [2024-04-26 15:36:34.448571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.069 [2024-04-26 15:36:34.448912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.069 [2024-04-26 15:36:34.448927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.069 qpair failed and we were unable to recover it. 00:26:17.069 [2024-04-26 15:36:34.449275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.069 [2024-04-26 15:36:34.449491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.069 [2024-04-26 15:36:34.449505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.069 qpair failed and we were unable to recover it. 
00:26:17.069 [2024-04-26 15:36:34.449704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.449901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.449917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.069 qpair failed and we were unable to recover it.
00:26:17.069 [2024-04-26 15:36:34.450243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.450578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.450592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.069 qpair failed and we were unable to recover it.
00:26:17.069 [2024-04-26 15:36:34.450922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.451304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.451317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.069 qpair failed and we were unable to recover it.
00:26:17.069 [2024-04-26 15:36:34.451656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.451943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.451958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.069 qpair failed and we were unable to recover it.
00:26:17.069 [2024-04-26 15:36:34.452257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.452573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.452588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.069 qpair failed and we were unable to recover it.
00:26:17.069 [2024-04-26 15:36:34.452902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.453229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.453243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.069 qpair failed and we were unable to recover it.
00:26:17.069 [2024-04-26 15:36:34.453552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.453903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.453917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.069 qpair failed and we were unable to recover it.
00:26:17.069 [2024-04-26 15:36:34.454265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.454616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.454631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.069 qpair failed and we were unable to recover it.
00:26:17.069 [2024-04-26 15:36:34.454894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.455286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.455301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.069 qpair failed and we were unable to recover it.
00:26:17.069 [2024-04-26 15:36:34.455488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.455797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.455812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.069 qpair failed and we were unable to recover it.
00:26:17.069 [2024-04-26 15:36:34.456053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.456273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.456287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.069 qpair failed and we were unable to recover it.
00:26:17.069 [2024-04-26 15:36:34.456619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.456831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.456852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.069 qpair failed and we were unable to recover it.
00:26:17.069 [2024-04-26 15:36:34.457076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.457440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.457457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.069 qpair failed and we were unable to recover it.
00:26:17.069 [2024-04-26 15:36:34.457789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.458120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.458134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.069 qpair failed and we were unable to recover it.
00:26:17.069 [2024-04-26 15:36:34.458463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.458817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.458831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.069 qpair failed and we were unable to recover it.
00:26:17.069 [2024-04-26 15:36:34.459156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.459510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.459524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.069 qpair failed and we were unable to recover it.
00:26:17.069 [2024-04-26 15:36:34.459911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.460206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.460220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.069 qpair failed and we were unable to recover it.
00:26:17.069 [2024-04-26 15:36:34.460619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.460972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.460986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.069 qpair failed and we were unable to recover it.
00:26:17.069 [2024-04-26 15:36:34.461346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.461542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.461555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.069 qpair failed and we were unable to recover it.
00:26:17.069 [2024-04-26 15:36:34.461891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.462051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.462065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.069 qpair failed and we were unable to recover it.
00:26:17.069 [2024-04-26 15:36:34.462422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.462774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.462788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.069 qpair failed and we were unable to recover it.
00:26:17.069 [2024-04-26 15:36:34.463019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.463354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.463368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.069 qpair failed and we were unable to recover it.
00:26:17.069 [2024-04-26 15:36:34.463596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.464009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.464023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.069 qpair failed and we were unable to recover it.
00:26:17.069 [2024-04-26 15:36:34.464348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.464545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.464559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.069 qpair failed and we were unable to recover it.
00:26:17.069 [2024-04-26 15:36:34.464897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.069 [2024-04-26 15:36:34.465262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.465276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.070 qpair failed and we were unable to recover it.
00:26:17.070 [2024-04-26 15:36:34.465499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.465691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.465706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.070 qpair failed and we were unable to recover it.
00:26:17.070 [2024-04-26 15:36:34.465898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.466257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.466271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.070 qpair failed and we were unable to recover it.
00:26:17.070 [2024-04-26 15:36:34.466601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.466821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.466834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.070 qpair failed and we were unable to recover it.
00:26:17.070 [2024-04-26 15:36:34.467167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.467528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.467542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.070 qpair failed and we were unable to recover it.
00:26:17.070 [2024-04-26 15:36:34.467734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.468056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.468072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.070 qpair failed and we were unable to recover it.
00:26:17.070 [2024-04-26 15:36:34.468446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.468781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.468795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.070 qpair failed and we were unable to recover it.
00:26:17.070 [2024-04-26 15:36:34.469144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.469497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.469512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.070 qpair failed and we were unable to recover it.
00:26:17.070 [2024-04-26 15:36:34.469866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.470102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.470116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.070 qpair failed and we were unable to recover it.
00:26:17.070 [2024-04-26 15:36:34.470450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.470663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.470677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.070 qpair failed and we were unable to recover it.
00:26:17.070 [2024-04-26 15:36:34.470862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.471152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.471166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.070 qpair failed and we were unable to recover it.
00:26:17.070 [2024-04-26 15:36:34.471469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.471799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.471813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.070 qpair failed and we were unable to recover it.
00:26:17.070 [2024-04-26 15:36:34.472026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.472382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.472396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.070 qpair failed and we were unable to recover it.
00:26:17.070 [2024-04-26 15:36:34.472752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.473116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.473130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.070 qpair failed and we were unable to recover it.
00:26:17.070 [2024-04-26 15:36:34.473262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.473601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.473615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.070 qpair failed and we were unable to recover it.
00:26:17.070 [2024-04-26 15:36:34.473947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.474276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.474290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.070 qpair failed and we were unable to recover it.
00:26:17.070 [2024-04-26 15:36:34.474634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.474992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.475007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.070 qpair failed and we were unable to recover it.
00:26:17.070 [2024-04-26 15:36:34.475359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.475696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.475710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.070 qpair failed and we were unable to recover it.
00:26:17.070 [2024-04-26 15:36:34.475963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.476304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.476318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.070 qpair failed and we were unable to recover it.
00:26:17.070 [2024-04-26 15:36:34.476651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.477008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.477023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.070 qpair failed and we were unable to recover it.
00:26:17.070 [2024-04-26 15:36:34.477221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.477531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.477544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.070 qpair failed and we were unable to recover it.
00:26:17.070 [2024-04-26 15:36:34.477870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.478235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.070 [2024-04-26 15:36:34.478249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.070 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.478573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.478932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.478946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.479300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.479527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.479540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.479885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.480256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.480270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.480488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.480689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.480702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.480935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.481143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.481158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.481505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.481833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.481853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.481950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.482348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.482361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.482708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.483069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.483083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.483405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.483764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.483778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.484179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.484529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.484543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.484613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.484934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.484948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.485352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.485699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.485713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.486063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.486402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.486416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.486605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.486920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.486934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.487249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.487574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.487588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.487935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.488262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.488276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.488599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.488955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.488969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.489297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.489600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.489614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.489954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.490298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.490311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.490645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.490993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.491007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.491231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.491587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.491601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.491932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.492268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.492282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.492540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.492864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.492878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.493228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.493568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.493582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.493655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.493962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.493977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.494300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.494648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.494662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.495024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.495359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.495373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.495706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.496037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.496051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.071 qpair failed and we were unable to recover it.
00:26:17.071 [2024-04-26 15:36:34.496388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.071 [2024-04-26 15:36:34.496726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.072 [2024-04-26 15:36:34.496740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.072 qpair failed and we were unable to recover it.
00:26:17.072 [2024-04-26 15:36:34.497056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.497375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.497389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.341 qpair failed and we were unable to recover it.
00:26:17.341 [2024-04-26 15:36:34.497733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.498086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.498100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.341 qpair failed and we were unable to recover it.
00:26:17.341 [2024-04-26 15:36:34.498416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.498766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.498779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.341 qpair failed and we were unable to recover it.
00:26:17.341 [2024-04-26 15:36:34.499106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.499373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.499387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.341 qpair failed and we were unable to recover it.
00:26:17.341 [2024-04-26 15:36:34.499740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.500085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.500099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.341 qpair failed and we were unable to recover it.
00:26:17.341 [2024-04-26 15:36:34.500295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.500662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.500676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.341 qpair failed and we were unable to recover it.
00:26:17.341 [2024-04-26 15:36:34.501051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.501380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.501393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.341 qpair failed and we were unable to recover it.
00:26:17.341 [2024-04-26 15:36:34.501751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.502081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.502096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.341 qpair failed and we were unable to recover it.
00:26:17.341 [2024-04-26 15:36:34.502437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.502769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.502782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.341 qpair failed and we were unable to recover it.
00:26:17.341 [2024-04-26 15:36:34.503008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.503321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.503334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.341 qpair failed and we were unable to recover it.
00:26:17.341 [2024-04-26 15:36:34.503686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.504007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.504022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.341 qpair failed and we were unable to recover it.
00:26:17.341 [2024-04-26 15:36:34.504389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.504685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.504699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.341 qpair failed and we were unable to recover it.
00:26:17.341 [2024-04-26 15:36:34.505039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.505287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.341 [2024-04-26 15:36:34.505301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.341 qpair failed and we were unable to recover it.
00:26:17.341 [2024-04-26 15:36:34.505621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.505893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.505908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 15:36:34.506105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.506400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.506415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 15:36:34.506722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.507078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.507093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 15:36:34.507400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.507712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.507725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 
00:26:17.341 [2024-04-26 15:36:34.507993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.508291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.508305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 15:36:34.508522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.508855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.508870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 15:36:34.509202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.509573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.509586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 15:36:34.509953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.510161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.510174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 
00:26:17.341 [2024-04-26 15:36:34.510522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.510678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.510693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 15:36:34.510967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.511273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.511287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 15:36:34.511597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.511960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.511974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.341 [2024-04-26 15:36:34.512327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.512510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.512525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 
00:26:17.341 [2024-04-26 15:36:34.512857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.513196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.341 [2024-04-26 15:36:34.513211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.341 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 15:36:34.513575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.513931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.513945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 15:36:34.514288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.514599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.514613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 15:36:34.514967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.515305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.515319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 
00:26:17.342 [2024-04-26 15:36:34.515548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.515908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.515922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 15:36:34.516246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.516574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.516587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 15:36:34.516943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.517315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.517329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 15:36:34.517514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.517830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.517850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 
00:26:17.342 [2024-04-26 15:36:34.518191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.518546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.518561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 15:36:34.518942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.519276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.519289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 15:36:34.519621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.519905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.519927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 15:36:34.520258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.520499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.520512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 
00:26:17.342 [2024-04-26 15:36:34.520825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.521166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.521180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 15:36:34.521487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.521684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.521699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 15:36:34.522027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.522383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.522396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 15:36:34.522709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.523070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.523085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 
00:26:17.342 [2024-04-26 15:36:34.523391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.523744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.523758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 15:36:34.524167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.524505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.524518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 15:36:34.524860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.525200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.525213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 15:36:34.525524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.525858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.525873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 
00:26:17.342 [2024-04-26 15:36:34.526204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.526557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.526574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 15:36:34.526893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.527221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.527234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 15:36:34.527448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.527728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.527742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 15:36:34.528101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.528304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.528320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 
00:26:17.342 [2024-04-26 15:36:34.528552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.528783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.528797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 15:36:34.529164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.529485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.529499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 15:36:34.529819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.530237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.530251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 15:36:34.530566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.530883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.530897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 
00:26:17.342 [2024-04-26 15:36:34.531232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.531559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.342 [2024-04-26 15:36:34.531572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.342 qpair failed and we were unable to recover it. 00:26:17.342 [2024-04-26 15:36:34.531933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.532290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.532305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 15:36:34.532631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.533006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.533024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 15:36:34.533377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.533704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.533718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 
00:26:17.343 [2024-04-26 15:36:34.534031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.534390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.534404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 15:36:34.534726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.535078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.535092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 15:36:34.535408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.535739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.535752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 15:36:34.535966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.536275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.536289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 
00:26:17.343 [2024-04-26 15:36:34.536602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.536938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.536952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 15:36:34.537148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.537505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.537519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 15:36:34.537835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.538166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.538179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 15:36:34.538561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.538876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.538890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 
00:26:17.343 [2024-04-26 15:36:34.539175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.539497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.539514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 15:36:34.539855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.540210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.540224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 15:36:34.540542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.540859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.540873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 15:36:34.541224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.541597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.541611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 
00:26:17.343 [2024-04-26 15:36:34.541948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.542313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.542327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 15:36:34.542551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.542774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.542787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 15:36:34.543129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.543463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.543476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 00:26:17.343 [2024-04-26 15:36:34.543823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.544052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.343 [2024-04-26 15:36:34.544068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.343 qpair failed and we were unable to recover it. 
00:26:17.346 [the same failure sequence — two posix_sock_create connect() failed, errno = 111 entries, one nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats continuously, identical except for timestamps, from 15:36:34.544295 through 15:36:34.599919]
00:26:17.346 [2024-04-26 15:36:34.600262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 15:36:34.600619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 15:36:34.600634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 15:36:34.600863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 15:36:34.601213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 15:36:34.601227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 15:36:34.601566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 15:36:34.601916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 15:36:34.601930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 15:36:34.602245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 15:36:34.602584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 15:36:34.602598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 
00:26:17.346 [2024-04-26 15:36:34.602952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 15:36:34.603312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 15:36:34.603327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 15:36:34.603651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 15:36:34.604067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.346 [2024-04-26 15:36:34.604082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.346 qpair failed and we were unable to recover it. 00:26:17.346 [2024-04-26 15:36:34.604268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.604630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.604644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 00:26:17.347 [2024-04-26 15:36:34.604955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.605315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.605329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 
00:26:17.347 [2024-04-26 15:36:34.605656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.605983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.605997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 00:26:17.347 [2024-04-26 15:36:34.606321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.606688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.606703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 00:26:17.347 [2024-04-26 15:36:34.607031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.607436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.607449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 00:26:17.347 [2024-04-26 15:36:34.607758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.608119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.608134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 
00:26:17.347 [2024-04-26 15:36:34.608491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.608813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.608828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 00:26:17.347 [2024-04-26 15:36:34.609150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.609477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.609491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 00:26:17.347 [2024-04-26 15:36:34.609792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.609974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.609989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 00:26:17.347 [2024-04-26 15:36:34.610332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.610661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.610674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 
00:26:17.347 [2024-04-26 15:36:34.611012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.611340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.611353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 00:26:17.347 [2024-04-26 15:36:34.611668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.612003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.612018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 00:26:17.347 [2024-04-26 15:36:34.612409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.612607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.612621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 00:26:17.347 [2024-04-26 15:36:34.612961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.613314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.613327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 
00:26:17.347 [2024-04-26 15:36:34.613674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.614030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.614044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 00:26:17.347 [2024-04-26 15:36:34.614372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.614711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.614724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 00:26:17.347 [2024-04-26 15:36:34.615128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.615513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.615526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 00:26:17.347 [2024-04-26 15:36:34.615723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.616078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.616092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 
00:26:17.347 [2024-04-26 15:36:34.616428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.616770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.616784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 00:26:17.347 [2024-04-26 15:36:34.617109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.617444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.617457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 00:26:17.347 [2024-04-26 15:36:34.617780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.618141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.618155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 00:26:17.347 [2024-04-26 15:36:34.618473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.618810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.618823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 
00:26:17.347 [2024-04-26 15:36:34.619179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.619518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.619532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 00:26:17.347 [2024-04-26 15:36:34.619869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.620189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.620203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 00:26:17.347 [2024-04-26 15:36:34.620417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.620784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.620798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 00:26:17.347 [2024-04-26 15:36:34.621137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.621497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.621511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 
00:26:17.347 [2024-04-26 15:36:34.621829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.622171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.622185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 00:26:17.347 [2024-04-26 15:36:34.622507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.622843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.622858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.347 qpair failed and we were unable to recover it. 00:26:17.347 [2024-04-26 15:36:34.623183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.623536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.347 [2024-04-26 15:36:34.623550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.348 qpair failed and we were unable to recover it. 00:26:17.348 [2024-04-26 15:36:34.623807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.624142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.624156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.348 qpair failed and we were unable to recover it. 
00:26:17.348 [2024-04-26 15:36:34.624475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.624819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.624833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.348 qpair failed and we were unable to recover it. 00:26:17.348 [2024-04-26 15:36:34.625035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.625360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.625374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.348 qpair failed and we were unable to recover it. 00:26:17.348 [2024-04-26 15:36:34.625665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.626011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.626027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.348 qpair failed and we were unable to recover it. 00:26:17.348 [2024-04-26 15:36:34.626351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.626595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.626613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.348 qpair failed and we were unable to recover it. 
00:26:17.348 [2024-04-26 15:36:34.626959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.627339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.627352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.348 qpair failed and we were unable to recover it. 00:26:17.348 [2024-04-26 15:36:34.627747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.628094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.628110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.348 qpair failed and we were unable to recover it. 00:26:17.348 [2024-04-26 15:36:34.628499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.628828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.628849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.348 qpair failed and we were unable to recover it. 00:26:17.348 [2024-04-26 15:36:34.629154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.629522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.629535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.348 qpair failed and we were unable to recover it. 
00:26:17.348 [2024-04-26 15:36:34.629778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.630134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.630148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.348 qpair failed and we were unable to recover it. 00:26:17.348 [2024-04-26 15:36:34.630469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.630710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.630723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.348 qpair failed and we were unable to recover it. 00:26:17.348 [2024-04-26 15:36:34.631079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.631418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.631431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.348 qpair failed and we were unable to recover it. 00:26:17.348 [2024-04-26 15:36:34.631764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.632096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.632109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.348 qpair failed and we were unable to recover it. 
00:26:17.348 [2024-04-26 15:36:34.632482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.632719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.632732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.348 qpair failed and we were unable to recover it. 00:26:17.348 [2024-04-26 15:36:34.633086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.633350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.633367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.348 qpair failed and we were unable to recover it. 00:26:17.348 [2024-04-26 15:36:34.633698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.634087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.634102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.348 qpair failed and we were unable to recover it. 00:26:17.348 [2024-04-26 15:36:34.634325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.634667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.634682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.348 qpair failed and we were unable to recover it. 
00:26:17.348 [2024-04-26 15:36:34.634901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.635114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.635130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.348 qpair failed and we were unable to recover it. 00:26:17.348 [2024-04-26 15:36:34.635489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.635852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.635868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.348 qpair failed and we were unable to recover it. 00:26:17.348 [2024-04-26 15:36:34.636058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.636382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.636397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.348 qpair failed and we were unable to recover it. 00:26:17.348 [2024-04-26 15:36:34.636719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.637074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.348 [2024-04-26 15:36:34.637088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.348 qpair failed and we were unable to recover it. 
00:26:17.348 [2024-04-26 15:36:34.637443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.348 [2024-04-26 15:36:34.637765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.348 [2024-04-26 15:36:34.637780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.348 qpair failed and we were unable to recover it.
00:26:17.348 [2024-04-26 15:36:34.638124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.348 [2024-04-26 15:36:34.638448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.348 [2024-04-26 15:36:34.638463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.348 qpair failed and we were unable to recover it.
00:26:17.348 [2024-04-26 15:36:34.638650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.348 [2024-04-26 15:36:34.639028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.348 [2024-04-26 15:36:34.639042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.348 qpair failed and we were unable to recover it.
00:26:17.348 [2024-04-26 15:36:34.639364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.348 [2024-04-26 15:36:34.639717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.348 [2024-04-26 15:36:34.639734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.348 qpair failed and we were unable to recover it.
00:26:17.348 [2024-04-26 15:36:34.640050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.348 [2024-04-26 15:36:34.640363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.348 [2024-04-26 15:36:34.640377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.348 qpair failed and we were unable to recover it.
00:26:17.348 [2024-04-26 15:36:34.640705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.348 [2024-04-26 15:36:34.641043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.348 [2024-04-26 15:36:34.641058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.348 qpair failed and we were unable to recover it.
00:26:17.348 [2024-04-26 15:36:34.641377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.348 [2024-04-26 15:36:34.641706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.348 [2024-04-26 15:36:34.641720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.348 qpair failed and we were unable to recover it.
00:26:17.348 [2024-04-26 15:36:34.642044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.348 [2024-04-26 15:36:34.642287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.642301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.642682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.643034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.643049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.643375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.643703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.643717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.644030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.644366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.644379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.644751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.644997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.645011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.645365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.645569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.645585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.645916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.646295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.646313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.646645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.646968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.646982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.647315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.647646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.647660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.648052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.648446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.648460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.648692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.649039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.649054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.649431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.649634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.649648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.649995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.650355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.650369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.650758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.651130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.651144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.651470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.651797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.651811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.652100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.652419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.652433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.652800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.653157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.653172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.653514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.653715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.653731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.654061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.654414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.654428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.654787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.655138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.655153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.655343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.655574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.655589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.655929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.656268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.656282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.656624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.656959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.656974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.657340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.657701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.657715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.658052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.658261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.658276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.658622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.658986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.659001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.659166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.659518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.659531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.659885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.660223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.660236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.660591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.660960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.349 [2024-04-26 15:36:34.660975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.349 qpair failed and we were unable to recover it.
00:26:17.349 [2024-04-26 15:36:34.661342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.661672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.661687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.662069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.662413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.662426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.662832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.663075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.663088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.663312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.663547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.663562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.663796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.664125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.664140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.664507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.664834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.664854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.665226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.665453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.665467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.665806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.666119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.666134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.666467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.666849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.666864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.667247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.667477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.667491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.667698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.667903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.667918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.668277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.668644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.668658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.669021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.669252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.669266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.669622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.669972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.669987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.670347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.670713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.670727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.671075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.671408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.671422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.671784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.672039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.672054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.672426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.672633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.672648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.672973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.673337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.673352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.673672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.673862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.673878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.674224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.674591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.674605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.674964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.675199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.675214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.675538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.675882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.675897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.676238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.676604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.676618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.676969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.677295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.350 [2024-04-26 15:36:34.677309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.350 qpair failed and we were unable to recover it.
00:26:17.350 [2024-04-26 15:36:34.677639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.677968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.677983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.678336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.678560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.678574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.678930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.679291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.679305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.679638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.679967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.679981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.680351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.680694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.680708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.680899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.681115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.681129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.681473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.681845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.681861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.682222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.682582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.682596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.682930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.683267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.683281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.683695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.683906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.683922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.684224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.684566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.684581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.684932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.685135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.685150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.685481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.685688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.685703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.686046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.686400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.686413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.686641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.687057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.687071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.687315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.687669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.687682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.688079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.688443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.688457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.688809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.689013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.689028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.689391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.689716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.689729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.690088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.690444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.690458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.690792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.691123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.691138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.691485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.691892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.691907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.692247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.692599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.692613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.692972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.693362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.693376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.693727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.694098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.694114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.694452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.694819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.694834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.695186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.695548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.351 [2024-04-26 15:36:34.695562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.351 qpair failed and we were unable to recover it.
00:26:17.351 [2024-04-26 15:36:34.695905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.351 [2024-04-26 15:36:34.696275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.351 [2024-04-26 15:36:34.696290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.351 qpair failed and we were unable to recover it. 00:26:17.351 [2024-04-26 15:36:34.696644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.351 [2024-04-26 15:36:34.696852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.696869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-26 15:36:34.697297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.697624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.697639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-26 15:36:34.698001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.698375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.698389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 
00:26:17.352 [2024-04-26 15:36:34.698744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.699070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.699085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-26 15:36:34.699429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.699829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.699854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-26 15:36:34.700200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.700570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.700584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-26 15:36:34.700939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.701286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.701299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 
00:26:17.352 [2024-04-26 15:36:34.701564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.701929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.701944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-26 15:36:34.702190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.702558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.702572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-26 15:36:34.702934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.703307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.703321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-26 15:36:34.703708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.703960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.703974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 
00:26:17.352 [2024-04-26 15:36:34.704322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.704689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.704702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-26 15:36:34.705069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.705441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.705454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-26 15:36:34.705815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.706179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.706194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-26 15:36:34.706549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.706896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.706911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 
00:26:17.352 [2024-04-26 15:36:34.707271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.707644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.707659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-26 15:36:34.708018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.708361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.708378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-26 15:36:34.708603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.708974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.708990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-26 15:36:34.709404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.709728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.709742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 
00:26:17.352 [2024-04-26 15:36:34.710081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.710458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.710473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-26 15:36:34.710797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.711168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.711184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-26 15:36:34.711544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.711903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.711919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-26 15:36:34.712265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.712641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.712655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 
00:26:17.352 [2024-04-26 15:36:34.712980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.713356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.713369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-26 15:36:34.713604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.713978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.713992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-26 15:36:34.714321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.714690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.714704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-26 15:36:34.714976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.715335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.715349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 
00:26:17.352 [2024-04-26 15:36:34.715601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.715945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.715960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-26 15:36:34.716286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.716643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:36:34.716656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-26 15:36:34.716984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.717244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.717258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-26 15:36:34.717576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.717797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.717811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 
00:26:17.353 [2024-04-26 15:36:34.718201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.718580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.718593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-26 15:36:34.718994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.719350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.719365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-26 15:36:34.719728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.720089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.720105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-26 15:36:34.720427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.720798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.720812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 
00:26:17.353 [2024-04-26 15:36:34.721140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.721474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.721489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-26 15:36:34.721825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.722136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.722150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-26 15:36:34.722474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.722807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.722822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-26 15:36:34.723035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.723380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.723394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 
00:26:17.353 [2024-04-26 15:36:34.723762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.723966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.723981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-26 15:36:34.724340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.724693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.724706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-26 15:36:34.725033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.725420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.725433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-26 15:36:34.725750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.726069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.726083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 
00:26:17.353 [2024-04-26 15:36:34.726301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.726652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.726665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-26 15:36:34.727015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.727388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.727402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-26 15:36:34.727764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.728130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.728146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-26 15:36:34.728490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.728824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.728844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 
00:26:17.353 [2024-04-26 15:36:34.729195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.729445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.729461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-26 15:36:34.729808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.730131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.730146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-26 15:36:34.730354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.730700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.730713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-26 15:36:34.730921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.731306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.731320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 
00:26:17.353 [2024-04-26 15:36:34.731684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.732025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.732040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-26 15:36:34.732409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.732792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.732806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-26 15:36:34.733134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.733502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.733516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-26 15:36:34.733879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.734221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.734236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 
00:26:17.353 [2024-04-26 15:36:34.734625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.734822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.734846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-26 15:36:34.735174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.735506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.735521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-26 15:36:34.735870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.736225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:36:34.736239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-26 15:36:34.736552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.736926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.736940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 
00:26:17.354 [2024-04-26 15:36:34.737242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.737578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.737591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-26 15:36:34.737953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.738318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.738331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-26 15:36:34.738658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.739027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.739041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-26 15:36:34.739360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.739715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.739729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 
00:26:17.354 [2024-04-26 15:36:34.740085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.740331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.740345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-26 15:36:34.740669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.741028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.741043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-26 15:36:34.741372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.741735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.741753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-26 15:36:34.742125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.742490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.742504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 
00:26:17.354 [2024-04-26 15:36:34.742865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.743220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.743234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-26 15:36:34.743599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.743942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.743958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-26 15:36:34.744311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.744619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.744634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-26 15:36:34.745025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.745343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.745357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 
00:26:17.354 [2024-04-26 15:36:34.745709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.746077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.746091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-26 15:36:34.746497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.746862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.746877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-26 15:36:34.747236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.747578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.747592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-26 15:36:34.747927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.748299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.748313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 
00:26:17.354 [2024-04-26 15:36:34.748669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.749011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.749028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-26 15:36:34.749358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.749709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.749722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-26 15:36:34.750113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.750490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.750505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-26 15:36:34.750853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.751179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.751192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 
00:26:17.354 [2024-04-26 15:36:34.751420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.751762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.751775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-26 15:36:34.752097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-26 15:36:34.752539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.752552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-26 15:36:34.752871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.753257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.753271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-26 15:36:34.753595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.753989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.754003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 
00:26:17.355 [2024-04-26 15:36:34.754353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.754724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.754737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-26 15:36:34.755108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.755468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.755481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-26 15:36:34.755810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.756179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.756197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-26 15:36:34.756514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.756844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.756858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 
00:26:17.355 [2024-04-26 15:36:34.757202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.757567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.757580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-26 15:36:34.757940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.758144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.758160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-26 15:36:34.758577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.758918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.758933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-26 15:36:34.759286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.759652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.759665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 
00:26:17.355 [2024-04-26 15:36:34.759990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.760367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.760380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-26 15:36:34.760745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.761077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.761092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-26 15:36:34.761462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.761739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.761753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-26 15:36:34.762090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.762462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.762476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 
00:26:17.355 [2024-04-26 15:36:34.762799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.763164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.763178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-26 15:36:34.763502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.763720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.763734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-26 15:36:34.764073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.764412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.764426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-26 15:36:34.764755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.765119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.765133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 
00:26:17.355 [2024-04-26 15:36:34.765390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.765612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.765628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-26 15:36:34.765823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.766173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.766187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-26 15:36:34.766508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.766876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.766890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-26 15:36:34.767215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.767562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.767575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 
00:26:17.355 [2024-04-26 15:36:34.767946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.768282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.768297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-26 15:36:34.768628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.768957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.768971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-26 15:36:34.769326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.769697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.769711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-26 15:36:34.770027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.770389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.770402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 
00:26:17.355 [2024-04-26 15:36:34.770729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.771068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.771082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-26 15:36:34.771473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.771836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-26 15:36:34.771856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-26 15:36:34.772139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.772478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.772491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-26 15:36:34.772846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.773206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.773220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 
00:26:17.356 [2024-04-26 15:36:34.773551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.773830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.773850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-26 15:36:34.774214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.774593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.774606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-26 15:36:34.774968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.775320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.775335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-26 15:36:34.775641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.776001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.776017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 
00:26:17.356 [2024-04-26 15:36:34.776378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.776718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.776731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-26 15:36:34.777068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.777504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.777517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-26 15:36:34.777869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.778195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.778209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-26 15:36:34.778548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.778849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.778864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 
00:26:17.356 [2024-04-26 15:36:34.779203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.779534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.779547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-26 15:36:34.779889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.780210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.780223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-26 15:36:34.780431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.780804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-26 15:36:34.780819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 15:36:34.781181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.781512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.781529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 
00:26:17.628 [2024-04-26 15:36:34.781884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.782241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.782256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 15:36:34.782601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.783010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.783024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 15:36:34.783351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.783687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.783701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 15:36:34.784040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.784431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.784445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 
00:26:17.628 [2024-04-26 15:36:34.784761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.785104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.785118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 15:36:34.785468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.785844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.785859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 15:36:34.786081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.786426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.786440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 15:36:34.786757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.786973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.786989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 
00:26:17.628 [2024-04-26 15:36:34.787344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.787708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.787721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 15:36:34.787913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.788276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.788290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 15:36:34.788650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.788990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.789005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 00:26:17.628 [2024-04-26 15:36:34.789347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.789686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.628 [2024-04-26 15:36:34.789700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.628 qpair failed and we were unable to recover it. 
00:26:17.628 [2024-04-26 15:36:34.790059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.790419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.790433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.628 qpair failed and we were unable to recover it.
00:26:17.628 [2024-04-26 15:36:34.790759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.790902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.790917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.628 qpair failed and we were unable to recover it.
00:26:17.628 [2024-04-26 15:36:34.791279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.791648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.791661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.628 qpair failed and we were unable to recover it.
00:26:17.628 [2024-04-26 15:36:34.791958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.792223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.792237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.628 qpair failed and we were unable to recover it.
00:26:17.628 [2024-04-26 15:36:34.792556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.792748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.792763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.628 qpair failed and we were unable to recover it.
00:26:17.628 [2024-04-26 15:36:34.793158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.793524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.793538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.628 qpair failed and we were unable to recover it.
00:26:17.628 [2024-04-26 15:36:34.793864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.794240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.794253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.628 qpair failed and we were unable to recover it.
00:26:17.628 [2024-04-26 15:36:34.794477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.794822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.794836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.628 qpair failed and we were unable to recover it.
00:26:17.628 [2024-04-26 15:36:34.795230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.795595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.795609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.628 qpair failed and we were unable to recover it.
00:26:17.628 [2024-04-26 15:36:34.795997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.796371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.796385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.628 qpair failed and we were unable to recover it.
00:26:17.628 [2024-04-26 15:36:34.796750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.797116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.797131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.628 qpair failed and we were unable to recover it.
00:26:17.628 [2024-04-26 15:36:34.797426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.797761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.797775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.628 qpair failed and we were unable to recover it.
00:26:17.628 [2024-04-26 15:36:34.798122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.798489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.798503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.628 qpair failed and we were unable to recover it.
00:26:17.628 [2024-04-26 15:36:34.798905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.799226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.799240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.628 qpair failed and we were unable to recover it.
00:26:17.628 [2024-04-26 15:36:34.799599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.628 [2024-04-26 15:36:34.799955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.799970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.800293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.800657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.800670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.801106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.801438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.801452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.801657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.801990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.802004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.802234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.802595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.802610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.802933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.803159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.803173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.803528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.803835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.803862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.804058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.804398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.804411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.804734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.804967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.804981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.805342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.805688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.805701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.806035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.806432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.806445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.806772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.807052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.807067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.807396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.807800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.807813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.808190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.808561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.808575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.808870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.809277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.809291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.809618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.809999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.810013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.810331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.810576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.810590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.810835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.811198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.811212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.811514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.811874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.811889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.812207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.812423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.812438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.812766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.813106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.813121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.813452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.813801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.813814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.814179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.814548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.814563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.814906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.815314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.815327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.815550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.815909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.815923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.816250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.816616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.816629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.816972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.817224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.817237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.817556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.817853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.817868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.818231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.818560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.818574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.818908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.819250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.819264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.819568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.819925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.819939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.820302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.820681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.820694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.820894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.821236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.821251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.821614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.821981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.821996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.822356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.822669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.822684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.823049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.823388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.823403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.823652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.824012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.824027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.824348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.824552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.824568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.824923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.825286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.825301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.825539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.825882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.825898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.826287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.826623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.826637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.826980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.827231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.827244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.827575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.827933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.827948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.828284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.828669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.828684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.828998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.829362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.829376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.829700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.829859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.829874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.830230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.830608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.830621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.629 qpair failed and we were unable to recover it.
00:26:17.629 [2024-04-26 15:36:34.830942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.831330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.629 [2024-04-26 15:36:34.831345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.630 qpair failed and we were unable to recover it.
00:26:17.630 [2024-04-26 15:36:34.831647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.630 [2024-04-26 15:36:34.832000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.630 [2024-04-26 15:36:34.832014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.630 qpair failed and we were unable to recover it.
00:26:17.630 [2024-04-26 15:36:34.832334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.630 [2024-04-26 15:36:34.832682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.630 [2024-04-26 15:36:34.832695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.630 qpair failed and we were unable to recover it.
00:26:17.630 [2024-04-26 15:36:34.833025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.630 [2024-04-26 15:36:34.833415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.630 [2024-04-26 15:36:34.833429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.630 qpair failed and we were unable to recover it.
00:26:17.630 [2024-04-26 15:36:34.833749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.630 [2024-04-26 15:36:34.834123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.630 [2024-04-26 15:36:34.834137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.630 qpair failed and we were unable to recover it.
00:26:17.630 [2024-04-26 15:36:34.834458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.630 [2024-04-26 15:36:34.834820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.630 [2024-04-26 15:36:34.834834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.630 qpair failed and we were unable to recover it.
00:26:17.630 [2024-04-26 15:36:34.835266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.630 [2024-04-26 15:36:34.835589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.630 [2024-04-26 15:36:34.835602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.630 qpair failed and we were unable to recover it.
00:26:17.630 [2024-04-26 15:36:34.835830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.630 [2024-04-26 15:36:34.836137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.630 [2024-04-26 15:36:34.836152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.630 qpair failed and we were unable to recover it.
00:26:17.630 [2024-04-26 15:36:34.836517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.630 [2024-04-26 15:36:34.836724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.630 [2024-04-26 15:36:34.836738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.630 qpair failed and we were unable to recover it.
00:26:17.630 [2024-04-26 15:36:34.837133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.630 [2024-04-26 15:36:34.837444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.630 [2024-04-26 15:36:34.837458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.630 qpair failed and we were unable to recover it.
00:26:17.630 [2024-04-26 15:36:34.837911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.630 [2024-04-26 15:36:34.838267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.630 [2024-04-26 15:36:34.838285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.630 qpair failed and we were unable to recover it.
00:26:17.630 [2024-04-26 15:36:34.838536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.838909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.838923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.839278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.839617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.839631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.839958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.840175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.840190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.840538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.840904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.840918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 
00:26:17.630 [2024-04-26 15:36:34.841265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.841603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.841616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.841947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.842307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.842320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.842551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.842827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.842849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.843188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.843397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.843412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 
00:26:17.630 [2024-04-26 15:36:34.843798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.844173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.844188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.844513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.844759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.844776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.845027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.845378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.845391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.845749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.846066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.846082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 
00:26:17.630 [2024-04-26 15:36:34.846320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.846673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.846690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.847019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.847323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.847339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.847663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.848006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.848020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.848272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.848634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.848647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 
00:26:17.630 [2024-04-26 15:36:34.848972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.849336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.849350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.849686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.850026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.850041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.850342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.850691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.850705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.850940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.851316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.851334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 
00:26:17.630 [2024-04-26 15:36:34.851642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.851978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.851994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.852404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.852734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.852748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.853188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.853548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.853563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.853909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.854228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.854243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 
00:26:17.630 [2024-04-26 15:36:34.854561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.854894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.854908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.855122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.855503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.855517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.855872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.856215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.856228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.856571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.856905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.856919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 
00:26:17.630 [2024-04-26 15:36:34.857254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.857603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.857618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.857945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.858291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.858308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.858629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.858867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.858881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.859273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.859633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.859647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 
00:26:17.630 [2024-04-26 15:36:34.859949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.860304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.860318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.860651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.860999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.861013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.861289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.861604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.861619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.862009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.862342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.862355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 
00:26:17.630 [2024-04-26 15:36:34.862719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.863098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.863113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.863438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.863813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.863827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.864090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.864469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.864484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.864805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.865183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.865198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 
00:26:17.630 [2024-04-26 15:36:34.865559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.865920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.865935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.866284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.866652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.866667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.630 qpair failed and we were unable to recover it. 00:26:17.630 [2024-04-26 15:36:34.866995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-04-26 15:36:34.867377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.867390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 00:26:17.631 [2024-04-26 15:36:34.867719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.868078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.868092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 
00:26:17.631 [2024-04-26 15:36:34.868339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.868686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.868700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 00:26:17.631 [2024-04-26 15:36:34.869042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.869402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.869416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 00:26:17.631 [2024-04-26 15:36:34.869779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.870120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.870135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 00:26:17.631 [2024-04-26 15:36:34.870501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.870865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.870881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 
00:26:17.631 [2024-04-26 15:36:34.871237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.871574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.871587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 00:26:17.631 [2024-04-26 15:36:34.871919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.872286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.872301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 00:26:17.631 [2024-04-26 15:36:34.872654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.873017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.873032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 00:26:17.631 [2024-04-26 15:36:34.873253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.873613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.873627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 
00:26:17.631 [2024-04-26 15:36:34.873998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.874364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.874377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 00:26:17.631 [2024-04-26 15:36:34.874702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.875083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.875098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 00:26:17.631 [2024-04-26 15:36:34.875417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.875778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.875792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 00:26:17.631 [2024-04-26 15:36:34.876214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.876558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.876573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 
00:26:17.631 [2024-04-26 15:36:34.876922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.877248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.877261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 00:26:17.631 [2024-04-26 15:36:34.877537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.877788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.877802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 00:26:17.631 [2024-04-26 15:36:34.878048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.878397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.878412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 00:26:17.631 [2024-04-26 15:36:34.878771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.878989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.879006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 
00:26:17.631 [2024-04-26 15:36:34.879372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.879713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.879728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 00:26:17.631 [2024-04-26 15:36:34.880048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.880285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.880301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 00:26:17.631 [2024-04-26 15:36:34.880633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.881537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.881566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 00:26:17.631 [2024-04-26 15:36:34.881791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.882132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-04-26 15:36:34.882148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.631 qpair failed and we were unable to recover it. 
00:26:17.631 [2024-04-26 15:36:34.882501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.882808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.882825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.883068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.883445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.883463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.883692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.883920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.883937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.884212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.884587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.884603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.884926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.885314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.885329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.885696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.886075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.886091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.886411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.886712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.886728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.887081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.887454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.887470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.887829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.888200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.888216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.888560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.888895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.888923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.889332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.889594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.889618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.889987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.890361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.890381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.890689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.891038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.891055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.891386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.891557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.891576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.891968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.892221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.892247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.892471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.892713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.892731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.893074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.893415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.893430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.893756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.894096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.894111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.894373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.894711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.894726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.895031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.895385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.895400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.895756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.896090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.896117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.896462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.896779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.896794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.897166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.897537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.897552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.897901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.898270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.898295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.898641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.899009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.899024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.899386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.899754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.899769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.900146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.900514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.900528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.900892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.901267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.901293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.901663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.901918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.631 [2024-04-26 15:36:34.901937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.631 qpair failed and we were unable to recover it.
00:26:17.631 [2024-04-26 15:36:34.902328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.902729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.902744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.903127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.903460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.903485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.903762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.904142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.904160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.904397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.904781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.904796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.905131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.905326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.905342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.905683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.906031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.906057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.906322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.906547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.906565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.906809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.907101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.907116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.907404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.907817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.907851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.908201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.908539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.908558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.908899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.909283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.909297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.909619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.909993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.910020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.910146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.910488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.910507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.910732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.911089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.911105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.911434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.911760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.911785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.912016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.912385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.912404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.912736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.913099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.913114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.913439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.913809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.913835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.914098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.914461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.914478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.914823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.915125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.915139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.915499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.915903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.915929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.916274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.916643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.916668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.916993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.917359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.917374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.917571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.917932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.917947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.918299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.918684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.918709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.919098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.919464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.919482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.919823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.920200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.920216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.920572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.920971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.920997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.921360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.921714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.921729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.921960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.922212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.922227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.922557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.922936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.922962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.923374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.923763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.923781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.924110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.924446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.924461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.924856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.925216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.925240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.925617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.925982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.926001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.926353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.926753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.926767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.927115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.927489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.927514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.927887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.928252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.928271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.928614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.928844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.928862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.929250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.929581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.929606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.929965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.930334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.930349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.930692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.931041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.931067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.931298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.931722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.931747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.932116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.932493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.932509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.932729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.932935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.932950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.933165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.933511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.933526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.933872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.934235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.934260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.934639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.935021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.935036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.935351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.935612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.935626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.935982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.936223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.936239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.936598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.936958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.936972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.937305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.937561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.937575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.937918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.938260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.938274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.938510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.938856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.938871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.632 qpair failed and we were unable to recover it.
00:26:17.632 [2024-04-26 15:36:34.939138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.939519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.632 [2024-04-26 15:36:34.939533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.633 qpair failed and we were unable to recover it.
00:26:17.633 [2024-04-26 15:36:34.939888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.633 [2024-04-26 15:36:34.940244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.633 [2024-04-26 15:36:34.940259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.633 qpair failed and we were unable to recover it.
00:26:17.633 [2024-04-26 15:36:34.940493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.633 [2024-04-26 15:36:34.940869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.633 [2024-04-26 15:36:34.940884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.633 qpair failed and we were unable to recover it.
00:26:17.633 [2024-04-26 15:36:34.941159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.633 [2024-04-26 15:36:34.941487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.633 [2024-04-26 15:36:34.941502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.633 qpair failed and we were unable to recover it.
00:26:17.633 [2024-04-26 15:36:34.941864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.633 [2024-04-26 15:36:34.942195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.633 [2024-04-26 15:36:34.942209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.633 qpair failed and we were unable to recover it.
00:26:17.633 [2024-04-26 15:36:34.942578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.942930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.942945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.943299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.943681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.943694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.944029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.944388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.944402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.944764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.945082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.945096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 
00:26:17.633 [2024-04-26 15:36:34.945443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.945812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.945826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.946175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.946539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.946554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.946899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.947130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.947145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.947535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.948003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.948017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 
00:26:17.633 [2024-04-26 15:36:34.948362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.948733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.948751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.949106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.949469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.949484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.949800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.950155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.950170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.950430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.950808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.950822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 
00:26:17.633 [2024-04-26 15:36:34.951228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.951573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.951587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.951935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.952286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.952299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.952624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.952834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.952858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.953192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.953522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.953536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 
00:26:17.633 [2024-04-26 15:36:34.953884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.954202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.954216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.954572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.954928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.954942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.955175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.955543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.955559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.955884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.956248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.956262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 
00:26:17.633 [2024-04-26 15:36:34.956587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.956918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.956933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.957293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.957633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.957646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.957979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.958325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.958338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.958666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.959013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.959027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 
00:26:17.633 [2024-04-26 15:36:34.959370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.959752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.959767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.960122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.960477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.960492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.960843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.961191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.961205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.961530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.961902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.961916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 
00:26:17.633 [2024-04-26 15:36:34.962264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.962695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.962712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.963036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.963406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.963420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.963749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.964094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.964109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.964422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.964763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.964776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 
00:26:17.633 [2024-04-26 15:36:34.965095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.965449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.965462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.965835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.966190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.966205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.966435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.966823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.966844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.967068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.967422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.967435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 
00:26:17.633 [2024-04-26 15:36:34.967785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.968156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.968170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.968497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.968854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.968868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.969267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.969479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.969496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.969833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.970181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.970195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 
00:26:17.633 [2024-04-26 15:36:34.970542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.970877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.970891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.971276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.971635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.971648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.971880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.972167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.972182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.972532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.972761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.972774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 
00:26:17.633 [2024-04-26 15:36:34.973151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.973485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.973499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.633 qpair failed and we were unable to recover it. 00:26:17.633 [2024-04-26 15:36:34.973846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.974217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.633 [2024-04-26 15:36:34.974230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.634 qpair failed and we were unable to recover it. 00:26:17.634 [2024-04-26 15:36:34.974593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.974967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.974982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.634 qpair failed and we were unable to recover it. 00:26:17.634 [2024-04-26 15:36:34.975325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.975660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.975674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.634 qpair failed and we were unable to recover it. 
00:26:17.634 [2024-04-26 15:36:34.975997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.976394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.976408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.634 qpair failed and we were unable to recover it. 00:26:17.634 [2024-04-26 15:36:34.976731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.977099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.977113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.634 qpair failed and we were unable to recover it. 00:26:17.634 [2024-04-26 15:36:34.977371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.977754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.977768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.634 qpair failed and we were unable to recover it. 00:26:17.634 [2024-04-26 15:36:34.978015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.978236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.978251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.634 qpair failed and we were unable to recover it. 
00:26:17.634 [2024-04-26 15:36:34.978657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.979015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.979029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.634 qpair failed and we were unable to recover it. 00:26:17.634 [2024-04-26 15:36:34.979415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.979830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.979854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.634 qpair failed and we were unable to recover it. 00:26:17.634 [2024-04-26 15:36:34.980215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.980569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.980582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.634 qpair failed and we were unable to recover it. 00:26:17.634 [2024-04-26 15:36:34.980956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.981283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.981296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.634 qpair failed and we were unable to recover it. 
00:26:17.634 [2024-04-26 15:36:34.981627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.981874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.981888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.634 qpair failed and we were unable to recover it. 00:26:17.634 [2024-04-26 15:36:34.982220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.982594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.982609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.634 qpair failed and we were unable to recover it. 00:26:17.634 [2024-04-26 15:36:34.982940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.983346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.983359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.634 qpair failed and we were unable to recover it. 00:26:17.634 [2024-04-26 15:36:34.983724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.983941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.983956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.634 qpair failed and we were unable to recover it. 
00:26:17.634 [2024-04-26 15:36:34.984361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.984683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.984696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.634 qpair failed and we were unable to recover it. 00:26:17.634 [2024-04-26 15:36:34.985034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.985403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.985418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.634 qpair failed and we were unable to recover it. 00:26:17.634 [2024-04-26 15:36:34.985820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.986150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.986165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.634 qpair failed and we were unable to recover it. 00:26:17.634 [2024-04-26 15:36:34.986526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.986870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.634 [2024-04-26 15:36:34.986885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.634 qpair failed and we were unable to recover it. 
00:26:17.634 [2024-04-26 15:36:34.987231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.987562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.987575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:34.988000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.988386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.988399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:34.988632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.988995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.989009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:34.989357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.989725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.989739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:34.990077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.990443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.990457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:34.990850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.991190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.991205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:34.991439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.991693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.991707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:34.992081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.992316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.992330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:34.992652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.992962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.992976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:34.993346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.993711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.993725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:34.994116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.994468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.994483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:34.994808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.995119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.995134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:34.995494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.995823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.995868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:34.996203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.996411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.996426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:34.996761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.997096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.997111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:34.997450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.997794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.997807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:34.998208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.998602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.998616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:34.999008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.999418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:34.999432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:34.999788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.000035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.000051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:35.000413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.000780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.000795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:35.001121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.001532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.001546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:35.001867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.002210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.002224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:35.002547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.002911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.002925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:35.003288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.003667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.003682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:35.004031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.004280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.004293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:35.004533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.004854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.004868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:35.005096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.005462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.005476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:35.005793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.006153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.006167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:35.006487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.006852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.006866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:35.007279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.007603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.007617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:35.007971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.008315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.008329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:35.008692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.009032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.009048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:35.009382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.009614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.009628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.634 qpair failed and we were unable to recover it.
00:26:17.634 [2024-04-26 15:36:35.009994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.634 [2024-04-26 15:36:35.010355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.010368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.010711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.010957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.010971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.011171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.011528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.011543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.011896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.012241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.012255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.012598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.012939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.012953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.013266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.013628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.013643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.013875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.014254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.014267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.014515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.014864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.014878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.015246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.015612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.015626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.015970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.016305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.016318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.016668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.017029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.017043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.017410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.017766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.017781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.018135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.018446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.018461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.018846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.019032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.019047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.019393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.019767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.019780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.020114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.020479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.020493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.020808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.021146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.021161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.021517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.021855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.021869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.022222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.022530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.022544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.022898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.023247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.023262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.023630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.023966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.023982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.024171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.024526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.024541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.024941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.025295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.025309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.025587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.025931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.025945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.026284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.026667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.026680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.026900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.027245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.027259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.027584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.027949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.027964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.028271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.028625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.028639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.029006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.029374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.029388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.029733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.029901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.029917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.030316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.030648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.030662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.031034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.031367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.031381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.031719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.032086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.032101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.032502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.032824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.032844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.033196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.033565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.033578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.033920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.034298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.034311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.034544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.034792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.034806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.035128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.035490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.035504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.035729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.036069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.635 [2024-04-26 15:36:35.036083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.635 qpair failed and we were unable to recover it.
00:26:17.635 [2024-04-26 15:36:35.036405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.635 [2024-04-26 15:36:35.036757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.635 [2024-04-26 15:36:35.036770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.635 qpair failed and we were unable to recover it. 00:26:17.635 [2024-04-26 15:36:35.037094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.635 [2024-04-26 15:36:35.037455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.635 [2024-04-26 15:36:35.037469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.635 qpair failed and we were unable to recover it. 00:26:17.635 [2024-04-26 15:36:35.037801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.635 [2024-04-26 15:36:35.038161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.635 [2024-04-26 15:36:35.038176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.635 qpair failed and we were unable to recover it. 00:26:17.635 [2024-04-26 15:36:35.038529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.635 [2024-04-26 15:36:35.038781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.635 [2024-04-26 15:36:35.038796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.635 qpair failed and we were unable to recover it. 
00:26:17.635 [2024-04-26 15:36:35.039142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.635 [2024-04-26 15:36:35.039514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.635 [2024-04-26 15:36:35.039529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.635 qpair failed and we were unable to recover it. 00:26:17.635 [2024-04-26 15:36:35.039769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.635 [2024-04-26 15:36:35.040138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.635 [2024-04-26 15:36:35.040154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.635 qpair failed and we were unable to recover it. 00:26:17.635 [2024-04-26 15:36:35.040349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.635 [2024-04-26 15:36:35.040579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.635 [2024-04-26 15:36:35.040594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.635 qpair failed and we were unable to recover it. 00:26:17.635 [2024-04-26 15:36:35.040961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.635 [2024-04-26 15:36:35.041304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.635 [2024-04-26 15:36:35.041318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.635 qpair failed and we were unable to recover it. 
00:26:17.635 [2024-04-26 15:36:35.041674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.635 [2024-04-26 15:36:35.042031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.635 [2024-04-26 15:36:35.042045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.635 qpair failed and we were unable to recover it. 00:26:17.635 [2024-04-26 15:36:35.042370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.635 [2024-04-26 15:36:35.042706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.635 [2024-04-26 15:36:35.042720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.635 qpair failed and we were unable to recover it. 00:26:17.635 [2024-04-26 15:36:35.042903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.635 [2024-04-26 15:36:35.043262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.635 [2024-04-26 15:36:35.043276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.635 qpair failed and we were unable to recover it. 00:26:17.635 [2024-04-26 15:36:35.043595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.635 [2024-04-26 15:36:35.043947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.043961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 
00:26:17.636 [2024-04-26 15:36:35.044313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.044707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.044720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-26 15:36:35.045072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.045440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.045454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-26 15:36:35.045774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.046146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.046160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-26 15:36:35.046487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.046887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.046902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 
00:26:17.636 [2024-04-26 15:36:35.047265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.047503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.047517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-26 15:36:35.047721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.047959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.047974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-26 15:36:35.048221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.048570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.048584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-26 15:36:35.048908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.049286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.049299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 
00:26:17.636 [2024-04-26 15:36:35.049535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.049895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.049909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-26 15:36:35.050338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.050660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.050674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-26 15:36:35.050935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.051252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.051267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-26 15:36:35.051483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.051826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.051846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 
00:26:17.636 [2024-04-26 15:36:35.052177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.052542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.052556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-26 15:36:35.052880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.053222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.053235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-26 15:36:35.053508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.053854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.053868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-26 15:36:35.054223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.054570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.054584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 
00:26:17.636 [2024-04-26 15:36:35.054907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.055278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.055292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-26 15:36:35.055640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.056004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.056019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-26 15:36:35.056340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.056700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.056713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-26 15:36:35.057040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.057385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.057398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 
00:26:17.636 [2024-04-26 15:36:35.057764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.058112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.058128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-26 15:36:35.058447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.058783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.058800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-26 15:36:35.059153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.059505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.059519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-26 15:36:35.059857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.060138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.060153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 
00:26:17.636 [2024-04-26 15:36:35.060477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.060818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.060832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-26 15:36:35.061164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.061504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.061519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-26 15:36:35.061861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.062216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.062229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-26 15:36:35.062552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.062756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.062771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 
00:26:17.636 [2024-04-26 15:36:35.063122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.063486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.063499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-26 15:36:35.063807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.064200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.064214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-26 15:36:35.064535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.064889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.064903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-26 15:36:35.065332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.065659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-26 15:36:35.065676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 
00:26:17.636 [2024-04-26 15:36:35.066031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.066406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.066423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 15:36:35.066745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.067090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.067105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 15:36:35.067428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.067788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.067802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 15:36:35.068132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.068500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.068515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 
00:26:17.909 [2024-04-26 15:36:35.068934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.069288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.069302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 15:36:35.069638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.069977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.069991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 15:36:35.070371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.070797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.070810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 15:36:35.071145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.071510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.071524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 
00:26:17.909 [2024-04-26 15:36:35.071852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.072220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.072234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 15:36:35.072611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.072969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.072986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 15:36:35.073370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.073577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.073592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 15:36:35.073913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.074291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.074305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 
00:26:17.909 [2024-04-26 15:36:35.074674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.075020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.075035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 15:36:35.075369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.075577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.075592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 15:36:35.075943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.076227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.076241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 15:36:35.076595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.076931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.076945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 
00:26:17.909 [2024-04-26 15:36:35.077300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.077674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.077687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 15:36:35.078021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.078386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.078399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 15:36:35.078715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.079147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.079161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 15:36:35.079471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.079834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.079859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 
00:26:17.909 [2024-04-26 15:36:35.080206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.080574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.080587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 15:36:35.080908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.081276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.081289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 15:36:35.081586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.081938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.081952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 15:36:35.082283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.082648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.082661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 
00:26:17.909 [2024-04-26 15:36:35.082883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.083268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.083282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 15:36:35.083590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.083932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.083947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.909 qpair failed and we were unable to recover it. 00:26:17.909 [2024-04-26 15:36:35.084270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.084634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.909 [2024-04-26 15:36:35.084649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 00:26:17.910 [2024-04-26 15:36:35.084971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 15:36:35.085324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.910 [2024-04-26 15:36:35.085337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.910 qpair failed and we were unable to recover it. 
00:26:17.910 [2024-04-26 15:36:35.085677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.085896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.085912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.086257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.086614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.086628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.087021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.087379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.087393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.087736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.088076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.088090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.088486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.088857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.088872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.089208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.089578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.089591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.089914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.090289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.090303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.090616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.090957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.090972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.091298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.091518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.091532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.091754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.092065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.092079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.092399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.092729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.092744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.093165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.093485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.093500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.093854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.094225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.094238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.094520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.094726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.094740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.095161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.095536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.095550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.095952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.096316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.096331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.096682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.097018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.097033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.097409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.097773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.097787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.098106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.098439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.098453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.098770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.099106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.099120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.099443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.099797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.099811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.100197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.100517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.100532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.100872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.101237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.101251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.101572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.101923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.101937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.102259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.102597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.102610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.102992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.103391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.103404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.103719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.104096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.104110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.104437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.104804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.104818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.105161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.105521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.105535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.105861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.106223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.106237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.106594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.106934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.106950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.107300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.107636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.107649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.108014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.108414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.108428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.108683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.108895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.108910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.109226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.109560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.109575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.109914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.110294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.110307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.110657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.110877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.110893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.111231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.111600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.111614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.112009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.112382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.112397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.112704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.113030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.113046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.910 qpair failed and we were unable to recover it.
00:26:17.910 [2024-04-26 15:36:35.113402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.910 [2024-04-26 15:36:35.113605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.113621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.113949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.114303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.114316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.114664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.115034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.115048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.115356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.115707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.115721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.115910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.116307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.116321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.116635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.116993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.117008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.117340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.117711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.117724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.117935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.118245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.118258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.118591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.118950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.118965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.119298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.119505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.119520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.119863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.120282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.120295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.120616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.120734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.120749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.121081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.121414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.121427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.121649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.122018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.122032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.122370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.122698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.122711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.123052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.123416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.123430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.123784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.124138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.124154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.124486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.124856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.124871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.125187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.125542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.125555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.125912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.126251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.126265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.126596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.127026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.127041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.127276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.127628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.127642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.127966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.128345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.128359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.128681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.129031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.129045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.129375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.129725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.129739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.130078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.130436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.130449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.130814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.131143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.131157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.131466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.131794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.131808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.132143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.132515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.132542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.132920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.133179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.133195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.133509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.133758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.133773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.134105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.134473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.911 [2024-04-26 15:36:35.134499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.911 qpair failed and we were unable to recover it.
00:26:17.911 [2024-04-26 15:36:35.134879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 15:36:35.135209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 15:36:35.135228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.911 [2024-04-26 15:36:35.135426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 15:36:35.135648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 15:36:35.135663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.911 [2024-04-26 15:36:35.136027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 15:36:35.136231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 15:36:35.136247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 00:26:17.911 [2024-04-26 15:36:35.136594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 15:36:35.136945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.911 [2024-04-26 15:36:35.136961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.911 qpair failed and we were unable to recover it. 
00:26:17.912 [2024-04-26 15:36:35.137314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.137572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.137598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 15:36:35.137971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.138336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.138352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 15:36:35.138701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.138957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.138973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 15:36:35.139327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.139691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.139710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 
00:26:17.912 [2024-04-26 15:36:35.140072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.140438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.140453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 15:36:35.140692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.140903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.140919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 15:36:35.141141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.141366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.141391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 15:36:35.141726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.142096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.142114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 
00:26:17.912 [2024-04-26 15:36:35.142464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.142817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.142832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 15:36:35.143130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.143507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.143534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 15:36:35.143916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.144335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.144350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 15:36:35.144696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.145037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.145062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 
00:26:17.912 [2024-04-26 15:36:35.145316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.145560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.145585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 15:36:35.145887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.146212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.146227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 15:36:35.146521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.146890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.146906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 15:36:35.147281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.147620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.147635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 
00:26:17.912 [2024-04-26 15:36:35.147994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.148385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.148411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 15:36:35.148665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.149049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.149075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 15:36:35.149466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.149693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.149717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 15:36:35.150078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.150447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.150462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 
00:26:17.912 [2024-04-26 15:36:35.150814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.151174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.151200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 15:36:35.151551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.151922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.151941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 15:36:35.152329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.152695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.152710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 15:36:35.152954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.153323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.153348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 
00:26:17.912 [2024-04-26 15:36:35.153724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.154093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.154108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 15:36:35.154441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.154853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.154877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 15:36:35.155193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.155603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.155622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 15:36:35.155963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.156165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.156181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 
00:26:17.912 [2024-04-26 15:36:35.156534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.156867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.156893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 15:36:35.157261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.157634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.157651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 15:36:35.158010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.158374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.158389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 00:26:17.912 [2024-04-26 15:36:35.158731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.159077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.159102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.912 qpair failed and we were unable to recover it. 
00:26:17.912 [2024-04-26 15:36:35.159482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.912 [2024-04-26 15:36:35.159741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.159766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 15:36:35.160110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.160316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.160330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 15:36:35.160575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.160919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.160934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 15:36:35.161295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.161544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.161570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 
00:26:17.913 [2024-04-26 15:36:35.161928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.162302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.162323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 15:36:35.162659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.163036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.163058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 15:36:35.163454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.163810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.163828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 15:36:35.164221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.164592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.164606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 
00:26:17.913 [2024-04-26 15:36:35.164850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.165191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.165206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 15:36:35.165453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.165785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.165810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 15:36:35.166209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.166542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.166561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 15:36:35.166888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.167195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.167210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 
00:26:17.913 [2024-04-26 15:36:35.167565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.167780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.167805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 15:36:35.168193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.168537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.168556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 15:36:35.168909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.169170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.169191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 15:36:35.169412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.169772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.169796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 
00:26:17.913 [2024-04-26 15:36:35.170155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.170497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.170515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 15:36:35.170854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.171175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.171190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 15:36:35.171539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.171913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.171928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 15:36:35.172295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.172665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.172691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 
00:26:17.913 [2024-04-26 15:36:35.173046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.173411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.173436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 15:36:35.173688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.173933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.173961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 15:36:35.174305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.174536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.174551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 15:36:35.174891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.175239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.175254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 
00:26:17.913 [2024-04-26 15:36:35.175616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.175994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.176027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 15:36:35.176299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.176698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.176724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 15:36:35.177083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.177371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.177396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 00:26:17.913 [2024-04-26 15:36:35.177753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.178133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.913 [2024-04-26 15:36:35.178149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.913 qpair failed and we were unable to recover it. 
00:26:17.913 [2024-04-26 15:36:35.178501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.913 [2024-04-26 15:36:35.178703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.913 [2024-04-26 15:36:35.178718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.913 qpair failed and we were unable to recover it.
[... the identical failure sequence — two posix_sock_create connect() errors with errno = 111, then nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x7fb678000b90 at addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats for every subsequent reconnect attempt through timestamp 15:36:35.237791; repeated records elided ...]
00:26:17.916 [2024-04-26 15:36:35.238118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.238473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.238487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.238811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.239165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.239180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.239377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.239743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.239758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.240097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.240474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.240488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 
00:26:17.916 [2024-04-26 15:36:35.240845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.241221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.241235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.241468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.241682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.241697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.242038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.242417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.242431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.242753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.242931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.242947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 
00:26:17.916 [2024-04-26 15:36:35.243323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.243648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.243661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.244029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.244372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.244386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.244642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.244990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.245004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.245194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.245514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.245528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 
00:26:17.916 [2024-04-26 15:36:35.245882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.246236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.246249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.246583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.246932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.246947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.247176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.247467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.247481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.247818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.248196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.248210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 
00:26:17.916 [2024-04-26 15:36:35.248557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.248853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.248868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.249198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.249534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.249548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.249905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.250264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.250278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.250598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.250967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.250982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 
00:26:17.916 [2024-04-26 15:36:35.251315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.251663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.251677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.252009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.252361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.252376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.252698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.253030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.253045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.253450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.253797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.253811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 
00:26:17.916 [2024-04-26 15:36:35.254223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.254558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.254572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.254925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.255263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.255278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.255626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.255961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.255975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.256205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.256570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.256584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 
00:26:17.916 [2024-04-26 15:36:35.256946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.257325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.257339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.257701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.258078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.258092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.258430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.258789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.258803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.259202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.259568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.259582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 
00:26:17.916 [2024-04-26 15:36:35.259935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.260246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.260259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.260467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.260699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.260715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.916 [2024-04-26 15:36:35.261078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.261317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.916 [2024-04-26 15:36:35.261331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.916 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 15:36:35.261681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.262057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.262071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 
00:26:17.917 [2024-04-26 15:36:35.262434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.262873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.262888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 15:36:35.263106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.263417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.263431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 15:36:35.263791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.263996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.264012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 15:36:35.264362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.264735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.264749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 
00:26:17.917 [2024-04-26 15:36:35.265091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.265435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.265449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 15:36:35.265774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.266133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.266147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 15:36:35.266507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.266872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.266888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 15:36:35.267251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.267589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.267603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 
00:26:17.917 [2024-04-26 15:36:35.267928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.268272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.268286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 15:36:35.268608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.268928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.268942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 15:36:35.269214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.269582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.269595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 15:36:35.269899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.270271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.270284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 
00:26:17.917 [2024-04-26 15:36:35.270643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.270976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.270990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 15:36:35.271367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.271571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.271589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 15:36:35.271929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.272270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.272284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 15:36:35.272629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.272982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.272996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 
00:26:17.917 [2024-04-26 15:36:35.273346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.273683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.273697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 15:36:35.274011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.274379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.274392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 15:36:35.274715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.275067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.275081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 00:26:17.917 [2024-04-26 15:36:35.275402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.275776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.917 [2024-04-26 15:36:35.275790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.917 qpair failed and we were unable to recover it. 
00:26:17.917 [2024-04-26 15:36:35.276115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.917 [2024-04-26 15:36:35.276445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.917 [2024-04-26 15:36:35.276459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:17.917 qpair failed and we were unable to recover it.
[... the same four-line sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously for the rest of the retry loop, through timestamp 15:36:35.335888 ...]
00:26:17.920 [2024-04-26 15:36:35.336312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.336640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.336655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.920 qpair failed and we were unable to recover it. 00:26:17.920 [2024-04-26 15:36:35.336979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.337336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.337350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.920 qpair failed and we were unable to recover it. 00:26:17.920 [2024-04-26 15:36:35.337743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.338113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.338127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.920 qpair failed and we were unable to recover it. 00:26:17.920 [2024-04-26 15:36:35.338496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.338857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.338872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.920 qpair failed and we were unable to recover it. 
00:26:17.920 [2024-04-26 15:36:35.339237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.339575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.339589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.920 qpair failed and we were unable to recover it. 00:26:17.920 [2024-04-26 15:36:35.339929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.340280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.340293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.920 qpair failed and we were unable to recover it. 00:26:17.920 [2024-04-26 15:36:35.340627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.341005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.341021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.920 qpair failed and we were unable to recover it. 00:26:17.920 [2024-04-26 15:36:35.341346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.341716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.341731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.920 qpair failed and we were unable to recover it. 
00:26:17.920 [2024-04-26 15:36:35.342074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.342436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.342450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.920 qpair failed and we were unable to recover it. 00:26:17.920 [2024-04-26 15:36:35.342773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.343038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.343053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.920 qpair failed and we were unable to recover it. 00:26:17.920 [2024-04-26 15:36:35.343369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.343740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.343755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.920 qpair failed and we were unable to recover it. 00:26:17.920 [2024-04-26 15:36:35.344101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.344302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.344317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.920 qpair failed and we were unable to recover it. 
00:26:17.920 [2024-04-26 15:36:35.344674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.345077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.920 [2024-04-26 15:36:35.345093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:17.920 qpair failed and we were unable to recover it. 00:26:18.188 [2024-04-26 15:36:35.345415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.188 [2024-04-26 15:36:35.345696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.188 [2024-04-26 15:36:35.345713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.188 qpair failed and we were unable to recover it. 00:26:18.188 [2024-04-26 15:36:35.346037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.346408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.346422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 00:26:18.189 [2024-04-26 15:36:35.346645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.347002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.347017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 
00:26:18.189 [2024-04-26 15:36:35.347313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.347679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.347694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 00:26:18.189 [2024-04-26 15:36:35.347894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.348258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.348274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 00:26:18.189 [2024-04-26 15:36:35.348680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.349030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.349046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 00:26:18.189 [2024-04-26 15:36:35.349388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.349731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.349744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 
00:26:18.189 [2024-04-26 15:36:35.350099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.350469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.350484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 00:26:18.189 [2024-04-26 15:36:35.350846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.351188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.351203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 00:26:18.189 [2024-04-26 15:36:35.351523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.351886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.351901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 00:26:18.189 [2024-04-26 15:36:35.352247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.352598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.352612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 
00:26:18.189 [2024-04-26 15:36:35.352936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.353280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.353294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 00:26:18.189 [2024-04-26 15:36:35.353624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.353993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.354008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 00:26:18.189 [2024-04-26 15:36:35.354345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.354726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.354740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 00:26:18.189 [2024-04-26 15:36:35.355082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.355298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.355311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 
00:26:18.189 [2024-04-26 15:36:35.355652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.355993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.356008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 00:26:18.189 [2024-04-26 15:36:35.356245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.356582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.356596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 00:26:18.189 [2024-04-26 15:36:35.356925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.357311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.357325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 00:26:18.189 [2024-04-26 15:36:35.357646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.357993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.358008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 
00:26:18.189 [2024-04-26 15:36:35.358214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.358528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.358543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 00:26:18.189 [2024-04-26 15:36:35.358864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.359072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.359087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 00:26:18.189 [2024-04-26 15:36:35.359425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.359764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.359778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 00:26:18.189 [2024-04-26 15:36:35.360107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.360478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.360493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 
00:26:18.189 [2024-04-26 15:36:35.360853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.361207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.361221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 00:26:18.189 [2024-04-26 15:36:35.361545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.361890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.361905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 00:26:18.189 [2024-04-26 15:36:35.362253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.362553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.362567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 00:26:18.189 [2024-04-26 15:36:35.362890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.363254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.363268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 
00:26:18.189 [2024-04-26 15:36:35.363590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.363924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.363939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 00:26:18.189 [2024-04-26 15:36:35.364232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.364597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.364612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.189 qpair failed and we were unable to recover it. 00:26:18.189 [2024-04-26 15:36:35.364823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.365015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.189 [2024-04-26 15:36:35.365031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.190 qpair failed and we were unable to recover it. 00:26:18.190 [2024-04-26 15:36:35.365383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.365734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.365748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.190 qpair failed and we were unable to recover it. 
00:26:18.190 [2024-04-26 15:36:35.366083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.366433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.366447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.190 qpair failed and we were unable to recover it. 00:26:18.190 [2024-04-26 15:36:35.366774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.367115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.367130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.190 qpair failed and we were unable to recover it. 00:26:18.190 [2024-04-26 15:36:35.367363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.367731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.367745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.190 qpair failed and we were unable to recover it. 00:26:18.190 [2024-04-26 15:36:35.367932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.368326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.368340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.190 qpair failed and we were unable to recover it. 
00:26:18.190 [2024-04-26 15:36:35.368658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.369024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.369039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.190 qpair failed and we were unable to recover it. 00:26:18.190 [2024-04-26 15:36:35.369441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.369813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.369828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.190 qpair failed and we were unable to recover it. 00:26:18.190 [2024-04-26 15:36:35.370193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.370525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.370540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.190 qpair failed and we were unable to recover it. 00:26:18.190 [2024-04-26 15:36:35.370978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.371362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.371378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.190 qpair failed and we were unable to recover it. 
00:26:18.190 [2024-04-26 15:36:35.371717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.371934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.371950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.190 qpair failed and we were unable to recover it. 00:26:18.190 [2024-04-26 15:36:35.372254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.372679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.372693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.190 qpair failed and we were unable to recover it. 00:26:18.190 [2024-04-26 15:36:35.373008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.373255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.373268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.190 qpair failed and we were unable to recover it. 00:26:18.190 [2024-04-26 15:36:35.373664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.373970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.190 [2024-04-26 15:36:35.373986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.190 qpair failed and we were unable to recover it. 
00:26:18.190 [2024-04-26 15:36:35.374339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.190 [2024-04-26 15:36:35.374547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.190 [2024-04-26 15:36:35.374562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.190 qpair failed and we were unable to recover it.
00:26:18.190 [... the same three-record cycle (posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 2024-04-26 15:36:35.374 through 15:36:35.435 ...]
00:26:18.193 [2024-04-26 15:36:35.435884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 15:36:35.436109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 15:36:35.436124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 15:36:35.436466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 15:36:35.436847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 15:36:35.436862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 15:36:35.437219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 15:36:35.437589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.193 [2024-04-26 15:36:35.437603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.193 qpair failed and we were unable to recover it. 00:26:18.193 [2024-04-26 15:36:35.437943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.438141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.438156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 
00:26:18.194 [2024-04-26 15:36:35.438474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.438859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.438875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 15:36:35.439234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.439581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.439594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 15:36:35.439927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.440142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.440158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 15:36:35.440531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.440875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.440890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 
00:26:18.194 [2024-04-26 15:36:35.441242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.441576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.441589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 15:36:35.441858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.442235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.442249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 15:36:35.442571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.442939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.442953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 15:36:35.443165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.443520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.443534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 
00:26:18.194 [2024-04-26 15:36:35.443856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.444231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.444245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 15:36:35.444566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.444924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.444939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 15:36:35.445304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.445668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.445683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 15:36:35.446038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.446341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.446356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 
00:26:18.194 [2024-04-26 15:36:35.446713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.447076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.447090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 15:36:35.447439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.447628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.447643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 15:36:35.448030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.448246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.448260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 15:36:35.448586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.448927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.448941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 
00:26:18.194 [2024-04-26 15:36:35.449260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.449626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.449640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 15:36:35.450009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.450208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.450223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 15:36:35.450618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.451027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.451041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 15:36:35.451308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.451676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.451689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 
00:26:18.194 [2024-04-26 15:36:35.452010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.452361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.452375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 15:36:35.452739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.453079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.453095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 15:36:35.453438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.453676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.453691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 15:36:35.454035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.454367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.454381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 
00:26:18.194 [2024-04-26 15:36:35.454731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.455078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.455093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 15:36:35.455415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.455776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.455790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 15:36:35.456083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.456389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.456403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.194 qpair failed and we were unable to recover it. 00:26:18.194 [2024-04-26 15:36:35.456721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.194 [2024-04-26 15:36:35.457097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.457111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 
00:26:18.195 [2024-04-26 15:36:35.457430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.457781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.457794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 15:36:35.458132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.458337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.458353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 15:36:35.458705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.459083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.459098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 15:36:35.459437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.459776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.459789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 
00:26:18.195 [2024-04-26 15:36:35.460118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.460476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.460490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 15:36:35.460849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.461204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.461219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 15:36:35.461541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.461875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.461890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 15:36:35.462241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.462616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.462629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 
00:26:18.195 [2024-04-26 15:36:35.462992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.463365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.463380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 15:36:35.463737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.464091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.464106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 15:36:35.464459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.464822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.464835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 15:36:35.465133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.465500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.465514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 
00:26:18.195 [2024-04-26 15:36:35.465712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.466083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.466098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 15:36:35.466425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.466774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.466788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 15:36:35.467126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.467481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.467495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 15:36:35.467712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.468032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.468046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 
00:26:18.195 [2024-04-26 15:36:35.468363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.468733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.468748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 15:36:35.469097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.469406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.469420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 15:36:35.469793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.470148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.470162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 15:36:35.470552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.470930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.470944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 
00:26:18.195 [2024-04-26 15:36:35.471152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.471387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.471401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 15:36:35.471736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.471951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.471967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 15:36:35.472367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.472700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.472714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 15:36:35.473027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.473391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.473405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 
00:26:18.195 [2024-04-26 15:36:35.473730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.474117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.474131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 15:36:35.474456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.474815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.474828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 15:36:35.475155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.475452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.475466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 00:26:18.195 [2024-04-26 15:36:35.475785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.476037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.195 [2024-04-26 15:36:35.476051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.195 qpair failed and we were unable to recover it. 
00:26:18.196 [2024-04-26 15:36:35.476399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 15:36:35.476767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 15:36:35.476780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 15:36:35.477112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 15:36:35.477488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 15:36:35.477502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 15:36:35.477875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 15:36:35.478213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 15:36:35.478226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 00:26:18.196 [2024-04-26 15:36:35.478420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 15:36:35.478764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.196 [2024-04-26 15:36:35.478778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.196 qpair failed and we were unable to recover it. 
00:26:18.196 [2024-04-26 15:36:35.479107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.479459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.479473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.196 qpair failed and we were unable to recover it.
00:26:18.196 [2024-04-26 15:36:35.479798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.480166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.480181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.196 qpair failed and we were unable to recover it.
00:26:18.196 [2024-04-26 15:36:35.480504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.480757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.480770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.196 qpair failed and we were unable to recover it.
00:26:18.196 [2024-04-26 15:36:35.481097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.481450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.481464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.196 qpair failed and we were unable to recover it.
00:26:18.196 [2024-04-26 15:36:35.481708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.482073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.482087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.196 qpair failed and we were unable to recover it.
00:26:18.196 [2024-04-26 15:36:35.482422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.482793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.482806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.196 qpair failed and we were unable to recover it.
00:26:18.196 [2024-04-26 15:36:35.483188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.483543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.483557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.196 qpair failed and we were unable to recover it.
00:26:18.196 [2024-04-26 15:36:35.483904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.484247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.484262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.196 qpair failed and we were unable to recover it.
00:26:18.196 [2024-04-26 15:36:35.484618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.484952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.484966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.196 qpair failed and we were unable to recover it.
00:26:18.196 [2024-04-26 15:36:35.485317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.485500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.485518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.196 qpair failed and we were unable to recover it.
00:26:18.196 [2024-04-26 15:36:35.485729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.485960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.485977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.196 qpair failed and we were unable to recover it.
00:26:18.196 [2024-04-26 15:36:35.486354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.486683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.486697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.196 qpair failed and we were unable to recover it.
00:26:18.196 [2024-04-26 15:36:35.486991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.487267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.487281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.196 qpair failed and we were unable to recover it.
00:26:18.196 [2024-04-26 15:36:35.487603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.487938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.487952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.196 qpair failed and we were unable to recover it.
00:26:18.196 [2024-04-26 15:36:35.488351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.488710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.488724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.196 qpair failed and we were unable to recover it.
00:26:18.196 [2024-04-26 15:36:35.489075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.489409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.489422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.196 qpair failed and we were unable to recover it.
00:26:18.196 [2024-04-26 15:36:35.489675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.490024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.490038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.196 qpair failed and we were unable to recover it.
00:26:18.196 [2024-04-26 15:36:35.490447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.490820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.490835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.196 qpair failed and we were unable to recover it.
00:26:18.196 [2024-04-26 15:36:35.491151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.491521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.491536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.196 qpair failed and we were unable to recover it.
00:26:18.196 [2024-04-26 15:36:35.491880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.492313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.492331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.196 qpair failed and we were unable to recover it.
00:26:18.196 [2024-04-26 15:36:35.492692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.492901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.492916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.196 qpair failed and we were unable to recover it.
00:26:18.196 [2024-04-26 15:36:35.493274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.493624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.493637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.196 qpair failed and we were unable to recover it.
00:26:18.196 [2024-04-26 15:36:35.493974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.494332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.494345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.196 qpair failed and we were unable to recover it.
00:26:18.196 [2024-04-26 15:36:35.494704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.494907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.196 [2024-04-26 15:36:35.494922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.196 qpair failed and we were unable to recover it.
00:26:18.196 [2024-04-26 15:36:35.495325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.495664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.495678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.495895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.496303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.496316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.496647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.497046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.497061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.497470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.497814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.497827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.498235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.498448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.498463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.498802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.499130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.499148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.499482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.499852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.499867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.500222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.500554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.500567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.500897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.501269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.501283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.501506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.501850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.501864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.502267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.502631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.502644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.503011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.503228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.503242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.503606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.504007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.504022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.504373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.504613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.504627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.504951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.505306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.505320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.505773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.506189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.506210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.506568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.506945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.506959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.507282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.507650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.507664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.507842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.508190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.508204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.508529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.508784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.508797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.509145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.509498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.509512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.509856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.510221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.510235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.510465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.510667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.510681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.511058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.511390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.511404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.511768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.512109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.512123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.512456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.512802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.512815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.513141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.513471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.513486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.513891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.514075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.514089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.197 qpair failed and we were unable to recover it.
00:26:18.197 [2024-04-26 15:36:35.514387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.197 [2024-04-26 15:36:35.514733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.514747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.198 qpair failed and we were unable to recover it.
00:26:18.198 [2024-04-26 15:36:35.515061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.515393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.515408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.198 qpair failed and we were unable to recover it.
00:26:18.198 [2024-04-26 15:36:35.515760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.516126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.516141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.198 qpair failed and we were unable to recover it.
00:26:18.198 [2024-04-26 15:36:35.516464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.516827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.516847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.198 qpair failed and we were unable to recover it.
00:26:18.198 [2024-04-26 15:36:35.517074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.517303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.517316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.198 qpair failed and we were unable to recover it.
00:26:18.198 [2024-04-26 15:36:35.517653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.518005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.518019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.198 qpair failed and we were unable to recover it.
00:26:18.198 [2024-04-26 15:36:35.518414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.518760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.518775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.198 qpair failed and we were unable to recover it.
00:26:18.198 [2024-04-26 15:36:35.519126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.519449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.519463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.198 qpair failed and we were unable to recover it.
00:26:18.198 [2024-04-26 15:36:35.519763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.520110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.520124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.198 qpair failed and we were unable to recover it.
00:26:18.198 [2024-04-26 15:36:35.520331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.520689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.520703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.198 qpair failed and we were unable to recover it.
00:26:18.198 [2024-04-26 15:36:35.520934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.521307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.521321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.198 qpair failed and we were unable to recover it.
00:26:18.198 [2024-04-26 15:36:35.521653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.521986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.522000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.198 qpair failed and we were unable to recover it.
00:26:18.198 [2024-04-26 15:36:35.522199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.522563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.522577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.198 qpair failed and we were unable to recover it.
00:26:18.198 [2024-04-26 15:36:35.522918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.523272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.523285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.198 qpair failed and we were unable to recover it.
00:26:18.198 [2024-04-26 15:36:35.523602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.523970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.523985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.198 qpair failed and we were unable to recover it.
00:26:18.198 [2024-04-26 15:36:35.524349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.524584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.524598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.198 qpair failed and we were unable to recover it.
00:26:18.198 [2024-04-26 15:36:35.525005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.525184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.525197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.198 qpair failed and we were unable to recover it.
00:26:18.198 [2024-04-26 15:36:35.525609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.525849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.525864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.198 qpair failed and we were unable to recover it.
00:26:18.198 [2024-04-26 15:36:35.526209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.526423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.526439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.198 qpair failed and we were unable to recover it.
00:26:18.198 [2024-04-26 15:36:35.526744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.527090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.198 [2024-04-26 15:36:35.527105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.198 qpair failed and we were unable to recover it.
00:26:18.198 [2024-04-26 15:36:35.527340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 15:36:35.527704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 15:36:35.527718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 15:36:35.528039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 15:36:35.528416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 15:36:35.528430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 15:36:35.528646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 15:36:35.528857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 15:36:35.528871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.198 [2024-04-26 15:36:35.529231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 15:36:35.529600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 15:36:35.529613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 
00:26:18.198 [2024-04-26 15:36:35.529921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 15:36:35.530337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.198 [2024-04-26 15:36:35.530351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.198 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 15:36:35.530716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 15:36:35.531023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 15:36:35.531037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 15:36:35.531331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 15:36:35.531658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 15:36:35.531673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 15:36:35.532008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 15:36:35.532349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 15:36:35.532363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 
00:26:18.199 [2024-04-26 15:36:35.532718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 15:36:35.533064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 15:36:35.533078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 15:36:35.533403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 15:36:35.533775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 15:36:35.533788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 15:36:35.533973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 15:36:35.534286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 15:36:35.534300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 15:36:35.534613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 15:36:35.534941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 15:36:35.534956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 
00:26:18.199 [2024-04-26 15:36:35.535321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 15:36:35.535619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 15:36:35.535633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 15:36:35.535857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 15:36:35.536313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 15:36:35.536327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 15:36:35.536645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 15:36:35.537017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 15:36:35.537031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.199 qpair failed and we were unable to recover it. 00:26:18.199 [2024-04-26 15:36:35.537417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 15:36:35.537770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.199 [2024-04-26 15:36:35.537785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 
00:26:18.200 [2024-04-26 15:36:35.538129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.538548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.538563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 15:36:35.538752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.539163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.539177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 15:36:35.539581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.539825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.539850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 15:36:35.540230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.540554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.540568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 
00:26:18.200 [2024-04-26 15:36:35.540922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.541246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.541259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 15:36:35.541677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.542031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.542047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 15:36:35.542394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.542776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.542789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 15:36:35.543121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.543502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.543515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 
00:26:18.200 [2024-04-26 15:36:35.543850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.544193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.544207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 15:36:35.544427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.544773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.544787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 15:36:35.545157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.545534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.545549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 15:36:35.545908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.546254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.546267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 
00:26:18.200 [2024-04-26 15:36:35.546586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.546929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.546944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 15:36:35.547283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.547602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.547616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.200 qpair failed and we were unable to recover it. 00:26:18.200 [2024-04-26 15:36:35.547811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.200 [2024-04-26 15:36:35.548181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.548196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 15:36:35.548524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.548877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.548891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 
00:26:18.201 [2024-04-26 15:36:35.549260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.549473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.549488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 15:36:35.549868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.550211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.550224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 15:36:35.550562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.550929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.550962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 15:36:35.551297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.551655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.551669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 
00:26:18.201 [2024-04-26 15:36:35.551890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.552238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.552251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 15:36:35.552616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.553004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.553020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 15:36:35.553241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.553504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.553518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 15:36:35.553858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.554164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.554178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 
00:26:18.201 [2024-04-26 15:36:35.554534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.554766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.554779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 15:36:35.555160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.555495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.555508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 15:36:35.555886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.556270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.556284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 15:36:35.556607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.556961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.556975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 
00:26:18.201 [2024-04-26 15:36:35.557321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.557691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.557704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 15:36:35.558120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.558474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.558488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 15:36:35.558875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.559222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.559236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 15:36:35.559560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.559910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.559925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 
00:26:18.201 [2024-04-26 15:36:35.560185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.560399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.560414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 15:36:35.560818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.561156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.561171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 15:36:35.561488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.561856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.561871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 15:36:35.562243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.562657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.562670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 
00:26:18.201 [2024-04-26 15:36:35.562916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.563250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.563264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 15:36:35.563581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.563924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.563938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 15:36:35.564267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.564626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.564639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 15:36:35.564967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.565333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.565347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 
00:26:18.201 [2024-04-26 15:36:35.565674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.566037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.566052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 15:36:35.566389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.566624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.566637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.201 qpair failed and we were unable to recover it. 00:26:18.201 [2024-04-26 15:36:35.566999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.201 [2024-04-26 15:36:35.567355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.567369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 15:36:35.567713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.567947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.567961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 
00:26:18.202 [2024-04-26 15:36:35.568327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.568604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.568628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 15:36:35.568991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.569366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.569379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 15:36:35.569755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.570108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.570122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 15:36:35.570486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.570849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.570863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 
00:26:18.202 [2024-04-26 15:36:35.571215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.571554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.571567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 15:36:35.571895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.572268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.572282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 15:36:35.572635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.573007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.573022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 15:36:35.573342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.573705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.573719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 
00:26:18.202 [2024-04-26 15:36:35.574121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.574486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.574501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 15:36:35.574843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.575180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.575195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 15:36:35.575542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.575916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.575931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 15:36:35.576275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.576652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.576666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 
00:26:18.202 [2024-04-26 15:36:35.576887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.577258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.577272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 15:36:35.577598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.577964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.577979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 15:36:35.578384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.578721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.578734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 15:36:35.579089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.579453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.579467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 
00:26:18.202 [2024-04-26 15:36:35.579790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.580159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.580174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 15:36:35.580542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.580740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.580754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 15:36:35.581070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.581439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.581454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 15:36:35.581796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.582143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.582157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 
00:26:18.202 [2024-04-26 15:36:35.582512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.582876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.582891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 15:36:35.583120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.583479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.583494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 15:36:35.583818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.584170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.584185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 15:36:35.584503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.584722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.584738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 
00:26:18.202 [2024-04-26 15:36:35.585093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.585393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.585407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.202 [2024-04-26 15:36:35.585747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.586115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.202 [2024-04-26 15:36:35.586129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.202 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 15:36:35.586453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.586818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.586831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 15:36:35.587188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.587563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.587577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 
00:26:18.203 [2024-04-26 15:36:35.587930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.588301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.588315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 15:36:35.588647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.588990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.589004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 15:36:35.589318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.589671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.589684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 15:36:35.590061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.590376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.590389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 
00:26:18.203 [2024-04-26 15:36:35.590707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.590948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.590963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 15:36:35.591347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.591723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.591738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 15:36:35.591979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.592173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.592188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 15:36:35.592431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.592774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.592790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 
00:26:18.203 [2024-04-26 15:36:35.593139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.593476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.593490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 15:36:35.593831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.594062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.594076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 15:36:35.594436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.594748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.594765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 15:36:35.595094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.595401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.595415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 
00:26:18.203 [2024-04-26 15:36:35.595768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.596091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.596105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 15:36:35.596472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.596810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.596824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 15:36:35.597178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.597488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.597502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 15:36:35.597827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.598194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.598210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 
00:26:18.203 [2024-04-26 15:36:35.598563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.598931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.598945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 15:36:35.599313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.599660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.599673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 15:36:35.600034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.600383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.600398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 15:36:35.600729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.601005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.601019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 
00:26:18.203 [2024-04-26 15:36:35.601349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.601760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.601777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 15:36:35.602175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.602525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.602538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 15:36:35.602873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.603248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.603262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.203 [2024-04-26 15:36:35.603447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.603861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.603875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 
00:26:18.203 [2024-04-26 15:36:35.604324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.604688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.203 [2024-04-26 15:36:35.604703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.203 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 15:36:35.605067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.605442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.605457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 15:36:35.605815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.606162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.606176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 15:36:35.606498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.606865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.606880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 
00:26:18.204 [2024-04-26 15:36:35.607224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.607548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.607561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 15:36:35.607917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.608289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.608303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 15:36:35.608629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.608996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.609017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 15:36:35.609331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.609679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.609693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 
00:26:18.204 [2024-04-26 15:36:35.610012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.610367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.610381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 15:36:35.610716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.611118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.611132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 15:36:35.611497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.611848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.611862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 15:36:35.612259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.612457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.612472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 
00:26:18.204 [2024-04-26 15:36:35.612822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.613181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.613195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 15:36:35.613614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.613981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.613995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 15:36:35.614331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.614688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.614701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 15:36:35.615023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.615401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.615415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 
00:26:18.204 [2024-04-26 15:36:35.615734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.616081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.616099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 15:36:35.616424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.616789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.616802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 15:36:35.617205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.617510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.617524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 15:36:35.617745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.617961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.617976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 
00:26:18.204 [2024-04-26 15:36:35.618397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.618731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.618745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 15:36:35.619105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.619470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.619483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.204 [2024-04-26 15:36:35.619817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.620036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.204 [2024-04-26 15:36:35.620051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.204 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 15:36:35.620406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 15:36:35.620769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 15:36:35.620784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 
00:26:18.205 [2024-04-26 15:36:35.621143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 15:36:35.621512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 15:36:35.621527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 15:36:35.621866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 15:36:35.622219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 15:36:35.622233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 15:36:35.622560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 15:36:35.622920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 15:36:35.622934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 00:26:18.205 [2024-04-26 15:36:35.623269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 15:36:35.623622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.205 [2024-04-26 15:36:35.623636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.205 qpair failed and we were unable to recover it. 
00:26:18.477 [2024-04-26 15:36:35.683537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.477 [2024-04-26 15:36:35.683742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.477 [2024-04-26 15:36:35.683757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.477 qpair failed and we were unable to recover it. 00:26:18.477 [2024-04-26 15:36:35.684095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.477 [2024-04-26 15:36:35.684434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.477 [2024-04-26 15:36:35.684447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.477 qpair failed and we were unable to recover it. 00:26:18.477 [2024-04-26 15:36:35.684774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.477 [2024-04-26 15:36:35.685147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.477 [2024-04-26 15:36:35.685162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.477 qpair failed and we were unable to recover it. 00:26:18.477 [2024-04-26 15:36:35.685480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.477 [2024-04-26 15:36:35.685852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.477 [2024-04-26 15:36:35.685869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.477 qpair failed and we were unable to recover it. 
00:26:18.477 [2024-04-26 15:36:35.686231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.477 [2024-04-26 15:36:35.686560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.477 [2024-04-26 15:36:35.686575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.477 qpair failed and we were unable to recover it. 00:26:18.477 [2024-04-26 15:36:35.686949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.477 [2024-04-26 15:36:35.687250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.477 [2024-04-26 15:36:35.687265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.477 qpair failed and we were unable to recover it. 00:26:18.477 [2024-04-26 15:36:35.687592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.477 [2024-04-26 15:36:35.687959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.477 [2024-04-26 15:36:35.687974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.477 qpair failed and we were unable to recover it. 00:26:18.477 [2024-04-26 15:36:35.688211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.477 [2024-04-26 15:36:35.688578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.477 [2024-04-26 15:36:35.688592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.477 qpair failed and we were unable to recover it. 
00:26:18.477 [2024-04-26 15:36:35.688920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.477 [2024-04-26 15:36:35.689297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.477 [2024-04-26 15:36:35.689310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.477 qpair failed and we were unable to recover it. 00:26:18.477 [2024-04-26 15:36:35.689728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.477 [2024-04-26 15:36:35.689956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.477 [2024-04-26 15:36:35.689972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.477 qpair failed and we were unable to recover it. 00:26:18.477 [2024-04-26 15:36:35.690434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.477 [2024-04-26 15:36:35.690760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.477 [2024-04-26 15:36:35.690774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 00:26:18.478 [2024-04-26 15:36:35.691132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.691495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.691510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 
00:26:18.478 [2024-04-26 15:36:35.691843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.692128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.692142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 00:26:18.478 [2024-04-26 15:36:35.692540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.692918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.692932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 00:26:18.478 [2024-04-26 15:36:35.693245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.693592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.693606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 00:26:18.478 [2024-04-26 15:36:35.693928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.694287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.694301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 
00:26:18.478 [2024-04-26 15:36:35.694665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.695029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.695045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 00:26:18.478 [2024-04-26 15:36:35.695386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.695741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.695756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 00:26:18.478 [2024-04-26 15:36:35.696127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.696444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.696460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 00:26:18.478 [2024-04-26 15:36:35.696812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.697120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.697135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 
00:26:18.478 [2024-04-26 15:36:35.697498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.697711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.697726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 00:26:18.478 [2024-04-26 15:36:35.698079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.698418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.698433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 00:26:18.478 [2024-04-26 15:36:35.698777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.699145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.699160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 00:26:18.478 [2024-04-26 15:36:35.699482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.699817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.699833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 
00:26:18.478 [2024-04-26 15:36:35.700004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.700339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.700353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 00:26:18.478 [2024-04-26 15:36:35.700686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.701031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.701046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 00:26:18.478 [2024-04-26 15:36:35.701363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.701721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.701737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 00:26:18.478 [2024-04-26 15:36:35.702087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.702427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.702441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 
00:26:18.478 [2024-04-26 15:36:35.702778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.703121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.703136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 00:26:18.478 [2024-04-26 15:36:35.703471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.703850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.703865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 00:26:18.478 [2024-04-26 15:36:35.704214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.704585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.704600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 00:26:18.478 [2024-04-26 15:36:35.704957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.705321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.705335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 
00:26:18.478 [2024-04-26 15:36:35.705681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.706026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.706040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 00:26:18.478 [2024-04-26 15:36:35.706374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.706686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.706699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 00:26:18.478 [2024-04-26 15:36:35.707023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.707397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.707411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 00:26:18.478 [2024-04-26 15:36:35.707726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.707905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.707924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 
00:26:18.478 [2024-04-26 15:36:35.708261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.708605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.708618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 00:26:18.478 [2024-04-26 15:36:35.708944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.709292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.709305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 00:26:18.478 [2024-04-26 15:36:35.709636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.709993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.478 [2024-04-26 15:36:35.710009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.478 qpair failed and we were unable to recover it. 00:26:18.478 [2024-04-26 15:36:35.710365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.710719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.710734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.479 qpair failed and we were unable to recover it. 
00:26:18.479 [2024-04-26 15:36:35.711075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.711501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.711515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.479 qpair failed and we were unable to recover it. 00:26:18.479 [2024-04-26 15:36:35.711834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.712208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.712224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.479 qpair failed and we were unable to recover it. 00:26:18.479 [2024-04-26 15:36:35.712572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.712984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.712998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.479 qpair failed and we were unable to recover it. 00:26:18.479 [2024-04-26 15:36:35.713325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.713650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.713665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.479 qpair failed and we were unable to recover it. 
00:26:18.479 [2024-04-26 15:36:35.714030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.714391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.714406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.479 qpair failed and we were unable to recover it. 00:26:18.479 [2024-04-26 15:36:35.714739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.715078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.715099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.479 qpair failed and we were unable to recover it. 00:26:18.479 [2024-04-26 15:36:35.715419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.715787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.715802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.479 qpair failed and we were unable to recover it. 00:26:18.479 [2024-04-26 15:36:35.716199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.716535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.716549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.479 qpair failed and we were unable to recover it. 
00:26:18.479 [2024-04-26 15:36:35.716905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.717278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.717293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.479 qpair failed and we were unable to recover it. 00:26:18.479 [2024-04-26 15:36:35.717582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.717931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.717947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.479 qpair failed and we were unable to recover it. 00:26:18.479 [2024-04-26 15:36:35.718297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.718607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.718621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.479 qpair failed and we were unable to recover it. 00:26:18.479 [2024-04-26 15:36:35.718908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.719261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.719276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.479 qpair failed and we were unable to recover it. 
00:26:18.479 [2024-04-26 15:36:35.719639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.719962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.719977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.479 qpair failed and we were unable to recover it. 00:26:18.479 [2024-04-26 15:36:35.720319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.720637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.720651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.479 qpair failed and we were unable to recover it. 00:26:18.479 [2024-04-26 15:36:35.720988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.721317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.721331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.479 qpair failed and we were unable to recover it. 00:26:18.479 [2024-04-26 15:36:35.721682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.721926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.721946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.479 qpair failed and we were unable to recover it. 
00:26:18.479 [2024-04-26 15:36:35.722310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.722674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.722688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.479 qpair failed and we were unable to recover it. 00:26:18.479 [2024-04-26 15:36:35.723012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.723380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.723397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.479 qpair failed and we were unable to recover it. 00:26:18.479 [2024-04-26 15:36:35.723701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.724093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.724111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.479 qpair failed and we were unable to recover it. 00:26:18.479 [2024-04-26 15:36:35.724370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.724764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.479 [2024-04-26 15:36:35.724779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.479 qpair failed and we were unable to recover it. 
00:26:18.479 [... 2024-04-26 15:36:35.724969 through 15:36:35.780072: the same retry pattern repeats for every subsequent connection attempt in this window; each attempt logs posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." ...]
00:26:18.482 [2024-04-26 15:36:35.780429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 15:36:35.780777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 15:36:35.780791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.482 qpair failed and we were unable to recover it. 00:26:18.482 [2024-04-26 15:36:35.781156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 15:36:35.781497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 15:36:35.781511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.482 qpair failed and we were unable to recover it. 00:26:18.482 [2024-04-26 15:36:35.781862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 15:36:35.782216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 15:36:35.782230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.482 qpair failed and we were unable to recover it. 00:26:18.482 [2024-04-26 15:36:35.782585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 15:36:35.782922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 15:36:35.782937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.482 qpair failed and we were unable to recover it. 
00:26:18.482 [2024-04-26 15:36:35.783054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 15:36:35.783256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 15:36:35.783270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.482 qpair failed and we were unable to recover it. 00:26:18.482 [2024-04-26 15:36:35.783608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.482 [2024-04-26 15:36:35.783888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.783903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 15:36:35.784114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.784325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.784340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 15:36:35.784703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.784962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.784978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 
00:26:18.483 [2024-04-26 15:36:35.785352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.785691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.785706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 15:36:35.786046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.786368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.786382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 15:36:35.786644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.786861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.786877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 15:36:35.787221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.787469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.787482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 
00:26:18.483 [2024-04-26 15:36:35.787807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.788172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.788187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 15:36:35.788528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.788875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.788890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 15:36:35.789266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.789637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.789653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 15:36:35.789997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.790354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.790368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 
00:26:18.483 [2024-04-26 15:36:35.790720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.790980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.790995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 15:36:35.791353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.791630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.791644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 15:36:35.791896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.792253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.792267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 15:36:35.792595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.792935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.792949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 
00:26:18.483 [2024-04-26 15:36:35.793293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.793668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.793681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 15:36:35.793950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.794316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.794329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 15:36:35.794651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.794963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.794978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 15:36:35.795333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.795601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.795614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 
00:26:18.483 [2024-04-26 15:36:35.795937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.796319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.796333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 15:36:35.796666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.797018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.797032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 15:36:35.797368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.797715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.797729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 15:36:35.798048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.798392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.798405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 
00:26:18.483 [2024-04-26 15:36:35.798604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.798981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.798996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 15:36:35.799398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.799769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.799784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 15:36:35.799991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.800288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.800302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 15:36:35.800566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.800805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.800819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 
00:26:18.483 [2024-04-26 15:36:35.801217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.801591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.801606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.483 qpair failed and we were unable to recover it. 00:26:18.483 [2024-04-26 15:36:35.801991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.802340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.483 [2024-04-26 15:36:35.802355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 15:36:35.802594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.802896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.802910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 15:36:35.803266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.803511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.803526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 
00:26:18.484 [2024-04-26 15:36:35.803883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.804255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.804269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 15:36:35.804513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.804885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.804899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 15:36:35.805252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.805595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.805609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 15:36:35.805939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.806194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.806208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 
00:26:18.484 [2024-04-26 15:36:35.806530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.806845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.806860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 15:36:35.807237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.807650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.807664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 15:36:35.808047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.808429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.808447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 15:36:35.808750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.809159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.809174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 
00:26:18.484 [2024-04-26 15:36:35.809436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.809775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.809789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 15:36:35.810169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.810489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.810503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 15:36:35.810893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.811242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.811256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 15:36:35.811603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.811961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.811975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 
00:26:18.484 [2024-04-26 15:36:35.812319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.812674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.812689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 15:36:35.813067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.813286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.813301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 15:36:35.813714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.814062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.814076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 15:36:35.814438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.814820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.814835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 
00:26:18.484 [2024-04-26 15:36:35.815223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.815596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.815617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 15:36:35.816031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.816367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.816381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 15:36:35.816693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.817032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.817046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 15:36:35.817284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.817630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.817644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 
00:26:18.484 [2024-04-26 15:36:35.818018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.818422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.818436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 15:36:35.818815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.819153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.819169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 15:36:35.819511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.819912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.819928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 00:26:18.484 [2024-04-26 15:36:35.820177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.820506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.484 [2024-04-26 15:36:35.820520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.484 qpair failed and we were unable to recover it. 
00:26:18.487 [2024-04-26 15:36:35.879209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 15:36:35.879542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.487 [2024-04-26 15:36:35.879556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 15:36:35.879816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.880144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.880161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 15:36:35.880485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.880863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.880878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 15:36:35.881211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.881578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.881592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 
00:26:18.488 [2024-04-26 15:36:35.881912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.882130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.882147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 15:36:35.882487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.882855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.882888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 15:36:35.883177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.883418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.883445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 15:36:35.883816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.884179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.884207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 
00:26:18.488 [2024-04-26 15:36:35.884590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.884933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.884960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 15:36:35.885324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.885675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.885701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 15:36:35.885990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.886376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.886401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 15:36:35.886791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.887163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.887192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 
00:26:18.488 [2024-04-26 15:36:35.887454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.887739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.887764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 15:36:35.888149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.888525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.888552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 15:36:35.888774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.889118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.889146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 15:36:35.889524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.889872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.889898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 
00:26:18.488 [2024-04-26 15:36:35.890254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.890632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.890659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 15:36:35.890934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.891395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.891421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 15:36:35.891680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.892034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.892063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 15:36:35.892410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.892788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.892814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 
00:26:18.488 [2024-04-26 15:36:35.892986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.893244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.893270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 15:36:35.893604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.893951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.893979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 15:36:35.894352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.894727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.894754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 15:36:35.895109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.895357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.895385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 
00:26:18.488 [2024-04-26 15:36:35.895746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.896159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.896187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 15:36:35.896416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.896792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.896818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-04-26 15:36:35.897017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.488 [2024-04-26 15:36:35.897379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.897406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 15:36:35.897646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.898030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.898059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 
00:26:18.489 [2024-04-26 15:36:35.898335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.898579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.898606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 15:36:35.898890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.899319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.899345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 15:36:35.899687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.899946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.899974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 15:36:35.900270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.900640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.900667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 
00:26:18.489 [2024-04-26 15:36:35.901026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.901419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.901446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 15:36:35.901860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.902248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.902274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 15:36:35.902649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.902996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.903015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 15:36:35.903346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.903730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.903756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 
00:26:18.489 [2024-04-26 15:36:35.904011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.904382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.904400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 15:36:35.904763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.905161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.905188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 15:36:35.905443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.905858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.905886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 15:36:35.906239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.906580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.906606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 
00:26:18.489 [2024-04-26 15:36:35.906976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.907348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.907373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 15:36:35.907766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.908128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.908154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 15:36:35.908500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.908856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.908882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 15:36:35.909256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.909614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.909640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 
00:26:18.489 [2024-04-26 15:36:35.910056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.910414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.910439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 15:36:35.910712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.910981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.911007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 15:36:35.911257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.911518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.911532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 15:36:35.911758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.911948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.911966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 
00:26:18.489 [2024-04-26 15:36:35.912265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.912592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.912609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 15:36:35.912982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.913364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.913388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 15:36:35.913615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.914019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.914045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 00:26:18.489 [2024-04-26 15:36:35.914444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.914879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.489 [2024-04-26 15:36:35.914905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.489 qpair failed and we were unable to recover it. 
00:26:18.489 [2024-04-26 15:36:35.915281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.758 [2024-04-26 15:36:35.915513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.758 [2024-04-26 15:36:35.915543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.758 qpair failed and we were unable to recover it. 00:26:18.758 [2024-04-26 15:36:35.915777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.758 [2024-04-26 15:36:35.916147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.758 [2024-04-26 15:36:35.916174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.758 qpair failed and we were unable to recover it. 00:26:18.758 [2024-04-26 15:36:35.916597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.758 [2024-04-26 15:36:35.917005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.758 [2024-04-26 15:36:35.917031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.758 qpair failed and we were unable to recover it. 00:26:18.758 [2024-04-26 15:36:35.917414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.758 [2024-04-26 15:36:35.917753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.758 [2024-04-26 15:36:35.917778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.758 qpair failed and we were unable to recover it. 
00:26:18.758 [2024-04-26 15:36:35.918148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.758 [2024-04-26 15:36:35.918389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.758 [2024-04-26 15:36:35.918415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.758 qpair failed and we were unable to recover it. 00:26:18.758 [2024-04-26 15:36:35.918531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.758 [2024-04-26 15:36:35.918921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.758 [2024-04-26 15:36:35.918939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.758 qpair failed and we were unable to recover it. 00:26:18.758 [2024-04-26 15:36:35.919285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.758 [2024-04-26 15:36:35.919493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.758 [2024-04-26 15:36:35.919508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.758 qpair failed and we were unable to recover it. 00:26:18.758 [2024-04-26 15:36:35.919878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.758 [2024-04-26 15:36:35.920284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.758 [2024-04-26 15:36:35.920315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.758 qpair failed and we were unable to recover it. 
00:26:18.758 [2024-04-26 15:36:35.920673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.758 [2024-04-26 15:36:35.921019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.758 [2024-04-26 15:36:35.921035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.758 qpair failed and we were unable to recover it. 00:26:18.758 [2024-04-26 15:36:35.921398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.758 [2024-04-26 15:36:35.921738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.758 [2024-04-26 15:36:35.921752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.758 qpair failed and we were unable to recover it. 00:26:18.758 [2024-04-26 15:36:35.922013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.758 [2024-04-26 15:36:35.922381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.758 [2024-04-26 15:36:35.922394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.758 qpair failed and we were unable to recover it. 00:26:18.758 [2024-04-26 15:36:35.922816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.758 [2024-04-26 15:36:35.923157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.758 [2024-04-26 15:36:35.923173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.758 qpair failed and we were unable to recover it. 
00:26:18.758 [2024-04-26 15:36:35.923438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.758 [2024-04-26 15:36:35.923768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.758 [2024-04-26 15:36:35.923782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.758 qpair failed and we were unable to recover it.
00:26:18.758 [2024-04-26 15:36:35.924150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.758 [2024-04-26 15:36:35.924357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.758 [2024-04-26 15:36:35.924371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.758 qpair failed and we were unable to recover it.
00:26:18.758 [2024-04-26 15:36:35.924601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.758 [2024-04-26 15:36:35.924895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.758 [2024-04-26 15:36:35.924909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.758 qpair failed and we were unable to recover it.
00:26:18.758 [2024-04-26 15:36:35.925248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.758 [2024-04-26 15:36:35.925499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.758 [2024-04-26 15:36:35.925513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.758 qpair failed and we were unable to recover it.
00:26:18.758 [2024-04-26 15:36:35.925757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.758 [2024-04-26 15:36:35.926092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.758 [2024-04-26 15:36:35.926107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.758 qpair failed and we were unable to recover it.
00:26:18.758 [2024-04-26 15:36:35.926475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.758 [2024-04-26 15:36:35.926699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.758 [2024-04-26 15:36:35.926719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.758 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.927091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.927421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.927434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.927672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.928039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.928054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.928369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.928653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.928666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.929017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.929368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.929382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.929588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.929790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.929806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.930102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.930502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.930516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.930850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.931199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.931213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.931600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.931936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.931950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.932302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.932644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.932658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.932900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.933232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.933250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.933549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.933894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.933908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.934268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.934521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.934536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.934789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.935135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.935150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.935518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.935830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.935852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.936233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.936593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.936606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.936860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.937307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.937321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.937654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.938104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.938167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.938549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.938886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.938902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.939245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.939516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.939531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.939911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.940233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.940255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.940608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.940918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.940933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.941288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.941663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.941677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.941927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.942182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.942198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.942525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.942895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.942911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.943282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.943637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.943651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.943974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.944354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.944368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.944626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.945042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.945057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.945403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.945656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.945670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.759 qpair failed and we were unable to recover it.
00:26:18.759 [2024-04-26 15:36:35.946050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.759 [2024-04-26 15:36:35.946392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.946407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.946769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.947129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.947143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.947515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.947888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.947909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.948297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.948518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.948533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.948894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.949300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.949314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.949516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.949893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.949908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.950243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.950370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.950383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.950716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.950969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.950983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.951352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.951703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.951717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.952064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.952398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.952411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.952777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.953240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.953254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.953582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.953935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.953950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.954284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.954664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.954677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.954998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.955361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.955375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.955579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.956012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.956027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.956363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.956731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.956745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.957092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.957452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.957466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.957834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.958224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.958240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.958592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.958928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.958942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.959222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.959567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.959581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.959908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.960288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.960303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.960653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.960986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.961000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.961351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.961679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.961693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.962045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.962410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.962424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.962691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.963040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.963054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.963427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.963769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.963784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.964127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.964465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.964479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.964820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.965231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.965246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.760 [2024-04-26 15:36:35.965449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.965769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.760 [2024-04-26 15:36:35.965783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.760 qpair failed and we were unable to recover it.
00:26:18.761 [2024-04-26 15:36:35.966142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.761 [2024-04-26 15:36:35.966506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.761 [2024-04-26 15:36:35.966519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.761 qpair failed and we were unable to recover it.
00:26:18.761 [2024-04-26 15:36:35.966847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.761 [2024-04-26 15:36:35.967193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.761 [2024-04-26 15:36:35.967207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.761 qpair failed and we were unable to recover it.
00:26:18.761 [2024-04-26 15:36:35.967528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.761 [2024-04-26 15:36:35.967904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.761 [2024-04-26 15:36:35.967919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.761 qpair failed and we were unable to recover it.
00:26:18.761 [2024-04-26 15:36:35.968262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.761 [2024-04-26 15:36:35.968630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.761 [2024-04-26 15:36:35.968644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.761 qpair failed and we were unable to recover it.
00:26:18.761 [2024-04-26 15:36:35.968989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.761 [2024-04-26 15:36:35.969369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.761 [2024-04-26 15:36:35.969385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.761 qpair failed and we were unable to recover it.
00:26:18.761 [2024-04-26 15:36:35.969695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.761 [2024-04-26 15:36:35.969967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.761 [2024-04-26 15:36:35.969982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.761 qpair failed and we were unable to recover it.
00:26:18.761 [2024-04-26 15:36:35.970312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.761 [2024-04-26 15:36:35.970642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.761 [2024-04-26 15:36:35.970655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.761 qpair failed and we were unable to recover it.
00:26:18.761 [2024-04-26 15:36:35.970899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.761 [2024-04-26 15:36:35.971147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.761 [2024-04-26 15:36:35.971161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.761 qpair failed and we were unable to recover it.
00:26:18.761 [2024-04-26 15:36:35.971523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.761 [2024-04-26 15:36:35.971775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.761 [2024-04-26 15:36:35.971788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.761 qpair failed and we were unable to recover it.
00:26:18.761 [2024-04-26 15:36:35.972210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.972551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.972565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.761 qpair failed and we were unable to recover it. 00:26:18.761 [2024-04-26 15:36:35.972908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.973257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.973270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.761 qpair failed and we were unable to recover it. 00:26:18.761 [2024-04-26 15:36:35.973599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.973926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.973941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.761 qpair failed and we were unable to recover it. 00:26:18.761 [2024-04-26 15:36:35.974317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.974686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.974700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.761 qpair failed and we were unable to recover it. 
00:26:18.761 [2024-04-26 15:36:35.975036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.975357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.975371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.761 qpair failed and we were unable to recover it. 00:26:18.761 [2024-04-26 15:36:35.975739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.976189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.976204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.761 qpair failed and we were unable to recover it. 00:26:18.761 [2024-04-26 15:36:35.976521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.976761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.976775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.761 qpair failed and we were unable to recover it. 00:26:18.761 [2024-04-26 15:36:35.977202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.977427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.977441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.761 qpair failed and we were unable to recover it. 
00:26:18.761 [2024-04-26 15:36:35.977809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.978144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.978159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.761 qpair failed and we were unable to recover it. 00:26:18.761 [2024-04-26 15:36:35.978412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.978635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.978649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.761 qpair failed and we were unable to recover it. 00:26:18.761 [2024-04-26 15:36:35.978899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.979281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.979295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.761 qpair failed and we were unable to recover it. 00:26:18.761 [2024-04-26 15:36:35.979619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.979938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.979952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.761 qpair failed and we were unable to recover it. 
00:26:18.761 [2024-04-26 15:36:35.980292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.980653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.980666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.761 qpair failed and we were unable to recover it. 00:26:18.761 [2024-04-26 15:36:35.980995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.981366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.981379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.761 qpair failed and we were unable to recover it. 00:26:18.761 [2024-04-26 15:36:35.981702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.982049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.982064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.761 qpair failed and we were unable to recover it. 00:26:18.761 [2024-04-26 15:36:35.982470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.982794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.982808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.761 qpair failed and we were unable to recover it. 
00:26:18.761 [2024-04-26 15:36:35.983063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.983427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.983441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.761 qpair failed and we were unable to recover it. 00:26:18.761 [2024-04-26 15:36:35.983767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.984097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.984111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.761 qpair failed and we were unable to recover it. 00:26:18.761 [2024-04-26 15:36:35.984437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.984813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.984827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.761 qpair failed and we were unable to recover it. 00:26:18.761 [2024-04-26 15:36:35.985188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.761 [2024-04-26 15:36:35.985553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.985568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.762 qpair failed and we were unable to recover it. 
00:26:18.762 [2024-04-26 15:36:35.985924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.986269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.986282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.762 qpair failed and we were unable to recover it. 00:26:18.762 [2024-04-26 15:36:35.986615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.986982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.986996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.762 qpair failed and we were unable to recover it. 00:26:18.762 [2024-04-26 15:36:35.987397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.987744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.987759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.762 qpair failed and we were unable to recover it. 00:26:18.762 [2024-04-26 15:36:35.988003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.988367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.988382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.762 qpair failed and we were unable to recover it. 
00:26:18.762 [2024-04-26 15:36:35.988736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.989083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.989099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.762 qpair failed and we were unable to recover it. 00:26:18.762 [2024-04-26 15:36:35.989420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.989788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.989801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.762 qpair failed and we were unable to recover it. 00:26:18.762 [2024-04-26 15:36:35.990200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.990582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.990596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.762 qpair failed and we were unable to recover it. 00:26:18.762 [2024-04-26 15:36:35.990798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.991192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.991206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.762 qpair failed and we were unable to recover it. 
00:26:18.762 [2024-04-26 15:36:35.991469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.991879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.991894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.762 qpair failed and we were unable to recover it. 00:26:18.762 [2024-04-26 15:36:35.992111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.992480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.992494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.762 qpair failed and we were unable to recover it. 00:26:18.762 [2024-04-26 15:36:35.992818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.993169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.993183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.762 qpair failed and we were unable to recover it. 00:26:18.762 [2024-04-26 15:36:35.993503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.993903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.993918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.762 qpair failed and we were unable to recover it. 
00:26:18.762 [2024-04-26 15:36:35.994350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.994681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.994694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.762 qpair failed and we were unable to recover it. 00:26:18.762 [2024-04-26 15:36:35.995040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.995385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.995398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.762 qpair failed and we were unable to recover it. 00:26:18.762 [2024-04-26 15:36:35.995741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.996112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.996127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.762 qpair failed and we were unable to recover it. 00:26:18.762 [2024-04-26 15:36:35.996495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.996861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.996876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.762 qpair failed and we were unable to recover it. 
00:26:18.762 [2024-04-26 15:36:35.997145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.997480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.997494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.762 qpair failed and we were unable to recover it. 00:26:18.762 [2024-04-26 15:36:35.997831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.998164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.998178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.762 qpair failed and we were unable to recover it. 00:26:18.762 [2024-04-26 15:36:35.998503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.998868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.998882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.762 qpair failed and we were unable to recover it. 00:26:18.762 [2024-04-26 15:36:35.999236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.999582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:35.999596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.762 qpair failed and we were unable to recover it. 
00:26:18.762 [2024-04-26 15:36:35.999962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:36.000353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:36.000368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.762 qpair failed and we were unable to recover it. 00:26:18.762 [2024-04-26 15:36:36.000701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:36.001080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:36.001096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.762 qpair failed and we were unable to recover it. 00:26:18.762 [2024-04-26 15:36:36.001352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:36.001557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:36.001572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.762 qpair failed and we were unable to recover it. 00:26:18.762 [2024-04-26 15:36:36.001931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:36.002275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.762 [2024-04-26 15:36:36.002289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 
00:26:18.763 [2024-04-26 15:36:36.002637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.003004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.003020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 00:26:18.763 [2024-04-26 15:36:36.003428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.003799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.003814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 00:26:18.763 [2024-04-26 15:36:36.004171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.004502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.004517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 00:26:18.763 [2024-04-26 15:36:36.004872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.005230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.005244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 
00:26:18.763 [2024-04-26 15:36:36.005558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.005903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.005917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 00:26:18.763 [2024-04-26 15:36:36.006161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.006510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.006524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 00:26:18.763 [2024-04-26 15:36:36.006855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.007236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.007250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 00:26:18.763 [2024-04-26 15:36:36.007597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.007914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.007937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 
00:26:18.763 [2024-04-26 15:36:36.008288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.008670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.008683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 00:26:18.763 [2024-04-26 15:36:36.009006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.009376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.009390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 00:26:18.763 [2024-04-26 15:36:36.009716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.009925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.009942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 00:26:18.763 [2024-04-26 15:36:36.010288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.010633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.010647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 
00:26:18.763 [2024-04-26 15:36:36.010996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.011241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.011256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 00:26:18.763 [2024-04-26 15:36:36.011657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.012031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.012047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 00:26:18.763 [2024-04-26 15:36:36.012393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.012698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.012712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 00:26:18.763 [2024-04-26 15:36:36.012912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.013325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.013339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 
00:26:18.763 [2024-04-26 15:36:36.013697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.013953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.013967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 00:26:18.763 [2024-04-26 15:36:36.014245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.014612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.014626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 00:26:18.763 [2024-04-26 15:36:36.014990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.015233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.015246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 00:26:18.763 [2024-04-26 15:36:36.015647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.016031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.016046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 
00:26:18.763 [2024-04-26 15:36:36.016332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.016706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.016721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 00:26:18.763 [2024-04-26 15:36:36.016954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.017315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.017330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 00:26:18.763 [2024-04-26 15:36:36.017664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.017999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.018014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 00:26:18.763 [2024-04-26 15:36:36.018385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.018753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.763 [2024-04-26 15:36:36.018767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.763 qpair failed and we were unable to recover it. 
00:26:18.766 [2024-04-26 15:36:36.076019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.766 [2024-04-26 15:36:36.076394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.766 [2024-04-26 15:36:36.076408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.766 qpair failed and we were unable to recover it. 00:26:18.767 [2024-04-26 15:36:36.076730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.077096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.077111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.767 qpair failed and we were unable to recover it. 00:26:18.767 [2024-04-26 15:36:36.077430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.077793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.077806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.767 qpair failed and we were unable to recover it. 00:26:18.767 [2024-04-26 15:36:36.078169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.078572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.078586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.767 qpair failed and we were unable to recover it. 
00:26:18.767 [2024-04-26 15:36:36.078944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.079305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.079318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.767 qpair failed and we were unable to recover it. 00:26:18.767 [2024-04-26 15:36:36.079643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.080003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.080018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.767 qpair failed and we were unable to recover it. 00:26:18.767 [2024-04-26 15:36:36.080370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.080577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.080592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.767 qpair failed and we were unable to recover it. 00:26:18.767 [2024-04-26 15:36:36.080925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.081295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.081309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.767 qpair failed and we were unable to recover it. 
00:26:18.767 [2024-04-26 15:36:36.081658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.081998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.082012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.767 qpair failed and we were unable to recover it. 00:26:18.767 [2024-04-26 15:36:36.082416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.082778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.082793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.767 qpair failed and we were unable to recover it. 00:26:18.767 [2024-04-26 15:36:36.082987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.083352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.083366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.767 qpair failed and we were unable to recover it. 00:26:18.767 [2024-04-26 15:36:36.083687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.084036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.084051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.767 qpair failed and we were unable to recover it. 
00:26:18.767 [2024-04-26 15:36:36.084373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.084730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.084744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.767 qpair failed and we were unable to recover it. 00:26:18.767 [2024-04-26 15:36:36.085088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.085455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.085470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.767 qpair failed and we were unable to recover it. 00:26:18.767 [2024-04-26 15:36:36.085886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.086246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.086262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.767 qpair failed and we were unable to recover it. 00:26:18.767 [2024-04-26 15:36:36.086502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.086833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.086858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.767 qpair failed and we were unable to recover it. 
00:26:18.767 [2024-04-26 15:36:36.087120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.087490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.087503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.767 qpair failed and we were unable to recover it. 00:26:18.767 [2024-04-26 15:36:36.087826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.088200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.088215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.767 qpair failed and we were unable to recover it. 00:26:18.767 [2024-04-26 15:36:36.088536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.088875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.088890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.767 qpair failed and we were unable to recover it. 00:26:18.767 [2024-04-26 15:36:36.089119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.089484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.089499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.767 qpair failed and we were unable to recover it. 
00:26:18.767 [2024-04-26 15:36:36.089822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.090187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.090202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.767 qpair failed and we were unable to recover it. 00:26:18.767 [2024-04-26 15:36:36.090530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.090875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.767 [2024-04-26 15:36:36.090890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.767 qpair failed and we were unable to recover it. 00:26:18.767 [2024-04-26 15:36:36.091242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.091610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.091623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 00:26:18.768 [2024-04-26 15:36:36.091874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.092247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.092261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 
00:26:18.768 [2024-04-26 15:36:36.092584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.092836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.092856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 00:26:18.768 [2024-04-26 15:36:36.093186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.093552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.093566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 00:26:18.768 [2024-04-26 15:36:36.093912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.094267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.094281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 00:26:18.768 [2024-04-26 15:36:36.094635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.094835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.094858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 
00:26:18.768 [2024-04-26 15:36:36.095214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.095477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.095491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 00:26:18.768 [2024-04-26 15:36:36.095821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.096037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.096053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 00:26:18.768 [2024-04-26 15:36:36.096391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.096723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.096736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 00:26:18.768 [2024-04-26 15:36:36.096958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.097301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.097315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 
00:26:18.768 [2024-04-26 15:36:36.097634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.097988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.098003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 00:26:18.768 [2024-04-26 15:36:36.098376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.098720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.098734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 00:26:18.768 [2024-04-26 15:36:36.099115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.099472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.099487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 00:26:18.768 [2024-04-26 15:36:36.099833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.100183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.100197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 
00:26:18.768 [2024-04-26 15:36:36.100529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.100876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.100891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 00:26:18.768 [2024-04-26 15:36:36.101251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.101618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.101634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 00:26:18.768 [2024-04-26 15:36:36.101867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.102202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.102217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 00:26:18.768 [2024-04-26 15:36:36.102562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.102897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.102911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 
00:26:18.768 [2024-04-26 15:36:36.103274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.103609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.103623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 00:26:18.768 [2024-04-26 15:36:36.103995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.104334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.104349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 00:26:18.768 [2024-04-26 15:36:36.104679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.104918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.104933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 00:26:18.768 [2024-04-26 15:36:36.105295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.105607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.105621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 
00:26:18.768 [2024-04-26 15:36:36.105986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.106373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.106388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 00:26:18.768 [2024-04-26 15:36:36.106729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.107079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.107094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 00:26:18.768 [2024-04-26 15:36:36.107445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.107710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.107726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 00:26:18.768 [2024-04-26 15:36:36.107942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.108329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.108345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 
00:26:18.768 [2024-04-26 15:36:36.108675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.109051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.109065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.768 qpair failed and we were unable to recover it. 00:26:18.768 [2024-04-26 15:36:36.109445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.109811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.768 [2024-04-26 15:36:36.109825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.769 qpair failed and we were unable to recover it. 00:26:18.769 [2024-04-26 15:36:36.110066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.110412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.110427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.769 qpair failed and we were unable to recover it. 00:26:18.769 [2024-04-26 15:36:36.110777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.111118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.111133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.769 qpair failed and we were unable to recover it. 
00:26:18.769 [2024-04-26 15:36:36.111470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.111676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.111690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.769 qpair failed and we were unable to recover it. 00:26:18.769 [2024-04-26 15:36:36.112020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.112359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.112373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.769 qpair failed and we were unable to recover it. 00:26:18.769 [2024-04-26 15:36:36.112693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.113035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.113050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.769 qpair failed and we were unable to recover it. 00:26:18.769 [2024-04-26 15:36:36.113371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.113720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.113734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.769 qpair failed and we were unable to recover it. 
00:26:18.769 [2024-04-26 15:36:36.114076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.114428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.114443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.769 qpair failed and we were unable to recover it. 00:26:18.769 [2024-04-26 15:36:36.114808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.115094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.115109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.769 qpair failed and we were unable to recover it. 00:26:18.769 [2024-04-26 15:36:36.115399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.115620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.115636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.769 qpair failed and we were unable to recover it. 00:26:18.769 [2024-04-26 15:36:36.115993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.116333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.116347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.769 qpair failed and we were unable to recover it. 
00:26:18.769 [2024-04-26 15:36:36.116705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.117048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.117063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.769 qpair failed and we were unable to recover it. 00:26:18.769 [2024-04-26 15:36:36.117386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.117746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.117759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.769 qpair failed and we were unable to recover it. 00:26:18.769 [2024-04-26 15:36:36.118095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.118469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.118483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.769 qpair failed and we were unable to recover it. 00:26:18.769 [2024-04-26 15:36:36.118808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.119174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.769 [2024-04-26 15:36:36.119189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:18.769 qpair failed and we were unable to recover it. 
00:26:18.769 [2024-04-26 15:36:36.119586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 [2024-04-26 15:36:36.119960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 [2024-04-26 15:36:36.119974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.769 qpair failed and we were unable to recover it.
00:26:18.769 [2024-04-26 15:36:36.120396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 [2024-04-26 15:36:36.120755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 [2024-04-26 15:36:36.120770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.769 qpair failed and we were unable to recover it.
00:26:18.769 [2024-04-26 15:36:36.121130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 [2024-04-26 15:36:36.121473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 [2024-04-26 15:36:36.121486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.769 qpair failed and we were unable to recover it.
00:26:18.769 [2024-04-26 15:36:36.121847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 [2024-04-26 15:36:36.122170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 [2024-04-26 15:36:36.122185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.769 qpair failed and we were unable to recover it.
00:26:18.769 [2024-04-26 15:36:36.122484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 [2024-04-26 15:36:36.122830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 [2024-04-26 15:36:36.122853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.769 qpair failed and we were unable to recover it.
00:26:18.769 [2024-04-26 15:36:36.123202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 [2024-04-26 15:36:36.123567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 [2024-04-26 15:36:36.123581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.769 qpair failed and we were unable to recover it.
00:26:18.769 [2024-04-26 15:36:36.123955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 [2024-04-26 15:36:36.124319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 [2024-04-26 15:36:36.124334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.769 qpair failed and we were unable to recover it.
00:26:18.769 [2024-04-26 15:36:36.124687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 [2024-04-26 15:36:36.125048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 [2024-04-26 15:36:36.125063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.769 qpair failed and we were unable to recover it.
00:26:18.769 [2024-04-26 15:36:36.125395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 [2024-04-26 15:36:36.125755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 [2024-04-26 15:36:36.125770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.769 qpair failed and we were unable to recover it.
00:26:18.769 [2024-04-26 15:36:36.126104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1789605 Killed "${NVMF_APP[@]}" "$@"
00:26:18.769 [2024-04-26 15:36:36.126437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 [2024-04-26 15:36:36.126452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.769 qpair failed and we were unable to recover it.
00:26:18.769 [2024-04-26 15:36:36.126667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 [2024-04-26 15:36:36.127018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 [2024-04-26 15:36:36.127032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.769 qpair failed and we were unable to recover it.
00:26:18.769 15:36:36 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:26:18.769 [2024-04-26 15:36:36.127373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 15:36:36 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:18.769 [2024-04-26 15:36:36.127744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 15:36:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:26:18.769 [2024-04-26 15:36:36.127758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.769 qpair failed and we were unable to recover it.
00:26:18.769 15:36:36 -- common/autotest_common.sh@710 -- # xtrace_disable
00:26:18.769 [2024-04-26 15:36:36.128098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 15:36:36 -- common/autotest_common.sh@10 -- # set +x
00:26:18.769 [2024-04-26 15:36:36.128455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.769 [2024-04-26 15:36:36.128469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.769 qpair failed and we were unable to recover it.
00:26:18.770 [2024-04-26 15:36:36.128790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.129051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.129066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.770 qpair failed and we were unable to recover it.
00:26:18.770 [2024-04-26 15:36:36.129393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.129678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.129692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.770 qpair failed and we were unable to recover it.
00:26:18.770 [2024-04-26 15:36:36.130019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.130397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.130411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.770 qpair failed and we were unable to recover it.
00:26:18.770 [2024-04-26 15:36:36.130684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.130944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.130959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.770 qpair failed and we were unable to recover it.
00:26:18.770 [2024-04-26 15:36:36.131298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.131503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.131517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.770 qpair failed and we were unable to recover it.
00:26:18.770 [2024-04-26 15:36:36.131894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.132260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.132276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.770 qpair failed and we were unable to recover it.
00:26:18.770 [2024-04-26 15:36:36.132520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.132898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.132927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.770 qpair failed and we were unable to recover it.
00:26:18.770 [2024-04-26 15:36:36.133309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.133602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.133629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.770 qpair failed and we were unable to recover it.
00:26:18.770 [2024-04-26 15:36:36.134014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.134388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.134416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.770 qpair failed and we were unable to recover it.
00:26:18.770 [2024-04-26 15:36:36.134804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.135209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.135240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.770 qpair failed and we were unable to recover it.
00:26:18.770 15:36:36 -- nvmf/common.sh@470 -- # nvmfpid=1790497
00:26:18.770 [2024-04-26 15:36:36.135613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 15:36:36 -- nvmf/common.sh@471 -- # waitforlisten 1790497
00:26:18.770 [2024-04-26 15:36:36.135961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.135990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.770 qpair failed and we were unable to recover it.
00:26:18.770 15:36:36 -- common/autotest_common.sh@817 -- # '[' -z 1790497 ']'
00:26:18.770 15:36:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:18.770 [2024-04-26 15:36:36.136372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 15:36:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:18.770 15:36:36 -- common/autotest_common.sh@822 -- # local max_retries=100
00:26:18.770 [2024-04-26 15:36:36.136755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.136786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.770 qpair failed and we were unable to recover it.
00:26:18.770 15:36:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:18.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:18.770 [2024-04-26 15:36:36.137067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 15:36:36 -- common/autotest_common.sh@826 -- # xtrace_disable
00:26:18.770 15:36:36 -- common/autotest_common.sh@10 -- # set +x
00:26:18.770 [2024-04-26 15:36:36.137406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.137435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.770 qpair failed and we were unable to recover it.
00:26:18.770 [2024-04-26 15:36:36.137693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.138094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.138124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.770 qpair failed and we were unable to recover it.
00:26:18.770 [2024-04-26 15:36:36.138500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.138876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.138904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.770 qpair failed and we were unable to recover it.
00:26:18.770 [2024-04-26 15:36:36.139290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.139628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.139649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.770 qpair failed and we were unable to recover it.
00:26:18.770 [2024-04-26 15:36:36.140004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.140366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.140382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.770 qpair failed and we were unable to recover it.
00:26:18.770 [2024-04-26 15:36:36.140517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.140684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.140703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.770 qpair failed and we were unable to recover it.
00:26:18.770 [2024-04-26 15:36:36.140958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.141201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.141229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.770 qpair failed and we were unable to recover it.
00:26:18.770 [2024-04-26 15:36:36.141595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.141981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.142011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.770 qpair failed and we were unable to recover it.
00:26:18.770 [2024-04-26 15:36:36.142365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.142598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.142626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.770 qpair failed and we were unable to recover it.
00:26:18.770 [2024-04-26 15:36:36.142886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.143176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.143203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.770 qpair failed and we were unable to recover it.
00:26:18.770 [2024-04-26 15:36:36.143434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.143708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.143735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.770 qpair failed and we were unable to recover it.
00:26:18.770 [2024-04-26 15:36:36.144123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.144489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.144518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.770 qpair failed and we were unable to recover it.
00:26:18.770 [2024-04-26 15:36:36.144779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.770 [2024-04-26 15:36:36.145042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.145072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.145427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.145789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.145819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.146177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.146525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.146545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.146908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.147299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.147315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.147671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.147897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.147916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.148288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.148532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.148559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.148922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.149182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.149209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.149569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.149806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.149821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.150186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.150527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.150553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.150901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.151163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.151189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.151511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.151875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.151904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.152296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.152558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.152582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.152854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.153250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.153278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.153590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.153962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.153989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.154253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.154603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.154629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.155025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.155418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.155445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.155812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.156166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.156185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.156449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.156789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.156805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.157158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.157507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.157522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.157879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.158260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.158338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.158740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.158983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.158998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.159266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.159646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.159671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.160076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.160449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.160475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.160835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.161103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.161130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.161366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.161789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.161816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.162184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.162564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.162586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.162958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.163363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.163388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.163754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.163915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.163941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.164295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.164644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.771 [2024-04-26 15:36:36.164661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.771 qpair failed and we were unable to recover it.
00:26:18.771 [2024-04-26 15:36:36.164993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.165357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.165378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.772 [2024-04-26 15:36:36.165594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.165962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.165990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.772 [2024-04-26 15:36:36.166212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.166591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.166614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.772 [2024-04-26 15:36:36.166982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.167382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.167409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.772 [2024-04-26 15:36:36.167805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.168175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.168212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.772 [2024-04-26 15:36:36.168566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.168922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.168948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.772 [2024-04-26 15:36:36.169340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.169689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.169705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.772 [2024-04-26 15:36:36.170056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.170398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.170412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.772 [2024-04-26 15:36:36.170665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.171035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.171061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.772 [2024-04-26 15:36:36.171382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.171779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.171798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.772 [2024-04-26 15:36:36.172199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.172528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.172549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.772 [2024-04-26 15:36:36.172896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.173258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.173272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.772 [2024-04-26 15:36:36.173487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.173732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.173747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.772 [2024-04-26 15:36:36.174179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.174522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.174537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.772 [2024-04-26 15:36:36.174905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.175124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.175138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.772 [2024-04-26 15:36:36.175505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.175851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.175867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.772 [2024-04-26 15:36:36.176189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.176571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.176586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.772 [2024-04-26 15:36:36.176911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.177164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.177178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.772 [2024-04-26 15:36:36.177503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.177856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.177871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.772 [2024-04-26 15:36:36.178090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.178483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.178498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.772 [2024-04-26 15:36:36.178832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.179073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.179089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.772 [2024-04-26 15:36:36.179461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.179858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.179876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.772 [2024-04-26 15:36:36.180134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.180369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.180383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.772 [2024-04-26 15:36:36.180759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.181108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.181123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.772 [2024-04-26 15:36:36.181334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.181734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.772 [2024-04-26 15:36:36.181748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.772 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.182094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.182309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.182324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.182598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.182935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.182950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.183339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.183717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.183731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.184072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.184289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.184304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.184664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.185010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.185024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.185383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.185752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.185768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.186154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.186279] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:26:18.773 [2024-04-26 15:36:36.186345] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:18.773 [2024-04-26 15:36:36.186537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.186556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.186912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.187340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.187357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.187673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.187934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.187952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.188316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.188706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.188722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.189111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.189488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.189504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.189859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.190101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.190119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.190508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.190884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.190901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.191275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.191483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.191499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.191853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.192236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.192252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.192453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.192822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.192844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.193213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.193593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.193608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.193852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.194185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.194201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.194572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.194768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.194784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.195203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.195395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.195412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.195802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.196154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.196171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.196530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.196795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.196811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.197170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.197561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.197575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.197810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.198223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.198238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:18.773 [2024-04-26 15:36:36.198571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.198979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:18.773 [2024-04-26 15:36:36.198995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:18.773 qpair failed and we were unable to recover it.
00:26:19.043 [2024-04-26 15:36:36.199387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.043 [2024-04-26 15:36:36.199726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.043 [2024-04-26 15:36:36.199741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.043 qpair failed and we were unable to recover it.
00:26:19.043 [2024-04-26 15:36:36.199992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.043 [2024-04-26 15:36:36.200395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.043 [2024-04-26 15:36:36.200409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.043 qpair failed and we were unable to recover it.
00:26:19.043 [2024-04-26 15:36:36.200664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.043 [2024-04-26 15:36:36.200917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.043 [2024-04-26 15:36:36.200932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.043 qpair failed and we were unable to recover it.
00:26:19.043 [2024-04-26 15:36:36.201235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.043 [2024-04-26 15:36:36.201570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.043 [2024-04-26 15:36:36.201584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.043 qpair failed and we were unable to recover it.
00:26:19.043 [2024-04-26 15:36:36.201922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.043 [2024-04-26 15:36:36.202319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.043 [2024-04-26 15:36:36.202335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.043 qpair failed and we were unable to recover it.
00:26:19.043 [2024-04-26 15:36:36.202688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.043 [2024-04-26 15:36:36.202899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.043 [2024-04-26 15:36:36.202916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.043 qpair failed and we were unable to recover it.
00:26:19.043 [2024-04-26 15:36:36.203259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.043 [2024-04-26 15:36:36.203640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.043 [2024-04-26 15:36:36.203655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.043 qpair failed and we were unable to recover it.
00:26:19.043 [2024-04-26 15:36:36.203998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.043 [2024-04-26 15:36:36.204200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.043 [2024-04-26 15:36:36.204214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.043 qpair failed and we were unable to recover it.
00:26:19.043 [2024-04-26 15:36:36.204609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.043 [2024-04-26 15:36:36.204981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.043 [2024-04-26 15:36:36.204996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.043 qpair failed and we were unable to recover it.
00:26:19.043 [2024-04-26 15:36:36.205202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.043 [2024-04-26 15:36:36.205626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.205640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.044 qpair failed and we were unable to recover it.
00:26:19.044 [2024-04-26 15:36:36.205973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.206341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.206357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.044 qpair failed and we were unable to recover it.
00:26:19.044 [2024-04-26 15:36:36.206567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.206909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.206925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.044 qpair failed and we were unable to recover it.
00:26:19.044 [2024-04-26 15:36:36.207343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.207737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.207751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.044 qpair failed and we were unable to recover it.
00:26:19.044 [2024-04-26 15:36:36.208094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.208303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.208318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.044 qpair failed and we were unable to recover it.
00:26:19.044 [2024-04-26 15:36:36.208699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.209022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.209038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.044 qpair failed and we were unable to recover it.
00:26:19.044 [2024-04-26 15:36:36.209441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.209786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.209799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.044 qpair failed and we were unable to recover it.
00:26:19.044 [2024-04-26 15:36:36.210253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.210632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.210647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.044 qpair failed and we were unable to recover it.
00:26:19.044 [2024-04-26 15:36:36.210999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.211355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.211371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.044 qpair failed and we were unable to recover it.
00:26:19.044 [2024-04-26 15:36:36.211713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.212024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.212039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.044 qpair failed and we were unable to recover it.
00:26:19.044 [2024-04-26 15:36:36.212276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.212651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.212666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.044 qpair failed and we were unable to recover it.
00:26:19.044 [2024-04-26 15:36:36.212930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.213157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.213171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.044 qpair failed and we were unable to recover it.
00:26:19.044 [2024-04-26 15:36:36.213577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.214023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.214039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.044 qpair failed and we were unable to recover it.
00:26:19.044 [2024-04-26 15:36:36.214416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.214825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.044 [2024-04-26 15:36:36.214856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.044 qpair failed and we were unable to recover it.
00:26:19.044 [2024-04-26 15:36:36.215253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.215599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.215614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-04-26 15:36:36.216075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.216437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.216453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-04-26 15:36:36.216825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.217225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.217240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-04-26 15:36:36.217464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.217851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.217867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 
00:26:19.044 [2024-04-26 15:36:36.218226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.218437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.218451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-04-26 15:36:36.218661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.218936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.218951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-04-26 15:36:36.219321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.219701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.219716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-04-26 15:36:36.220047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.220438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.220452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 
00:26:19.044 [2024-04-26 15:36:36.220647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.220969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.220984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-04-26 15:36:36.221335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.221543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.221558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-04-26 15:36:36.221948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.222180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.222194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-04-26 15:36:36.222555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.222910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.222925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 
00:26:19.044 [2024-04-26 15:36:36.223220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.223579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.223593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.044 [2024-04-26 15:36:36.223912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.224135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.224150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.044 [2024-04-26 15:36:36.224511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.224754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.044 [2024-04-26 15:36:36.224767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.044 qpair failed and we were unable to recover it. 00:26:19.045 [2024-04-26 15:36:36.225178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.225561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.225577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 
00:26:19.045 [2024-04-26 15:36:36.225942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.226300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.226314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-04-26 15:36:36.226660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.227024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.227038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-04-26 15:36:36.227468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.227861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.227876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-04-26 15:36:36.228280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.228650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.228665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 
00:26:19.045 [2024-04-26 15:36:36.229010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.229358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.229372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-04-26 15:36:36.229702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.229970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.229985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-04-26 15:36:36.230213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.230584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.230599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-04-26 15:36:36.231058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.231441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.231455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 
00:26:19.045 [2024-04-26 15:36:36.231822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.232107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.232122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-04-26 15:36:36.232453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.232801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.232815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-04-26 15:36:36.233173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.233384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.233399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-04-26 15:36:36.233737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.233962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.233977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 
00:26:19.045 [2024-04-26 15:36:36.234351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.234700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.234714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-04-26 15:36:36.235056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.235407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.235422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-04-26 15:36:36.235808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.236177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.236198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-04-26 15:36:36.236597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.236938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.236953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 
00:26:19.045 [2024-04-26 15:36:36.237366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.237744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.237759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-04-26 15:36:36.238083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.238440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.238455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-04-26 15:36:36.238801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.239161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.239175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-04-26 15:36:36.239410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.239613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.239627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 
00:26:19.045 [2024-04-26 15:36:36.239907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.240147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.240162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-04-26 15:36:36.240576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.240927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.240942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-04-26 15:36:36.241322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.241511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.241527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-04-26 15:36:36.241866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.242228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.242244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 
00:26:19.045 [2024-04-26 15:36:36.242574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.242928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.242943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-04-26 15:36:36.243317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.243501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.243516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-04-26 15:36:36.243852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.244197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.244211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.045 qpair failed and we were unable to recover it. 00:26:19.045 [2024-04-26 15:36:36.244456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.045 [2024-04-26 15:36:36.244662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.244677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 
00:26:19.046 [2024-04-26 15:36:36.245014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.245369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.245383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 00:26:19.046 [2024-04-26 15:36:36.245713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.245972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.245988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 00:26:19.046 [2024-04-26 15:36:36.246359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.246731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.246745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 00:26:19.046 [2024-04-26 15:36:36.247099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.247483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.247498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 
00:26:19.046 [2024-04-26 15:36:36.247873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.248209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.248224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 00:26:19.046 [2024-04-26 15:36:36.248548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.248929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.248944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 00:26:19.046 [2024-04-26 15:36:36.249195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.249544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.249558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 00:26:19.046 [2024-04-26 15:36:36.249895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.250220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.250236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 
00:26:19.046 [2024-04-26 15:36:36.250492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.250864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.250879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 00:26:19.046 [2024-04-26 15:36:36.251230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.251447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.251461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 00:26:19.046 [2024-04-26 15:36:36.251882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.252252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.252268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 00:26:19.046 [2024-04-26 15:36:36.252620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.252977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.252993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 
00:26:19.046 [2024-04-26 15:36:36.253322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.253647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.253662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 00:26:19.046 [2024-04-26 15:36:36.253932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.254200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.254216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 00:26:19.046 [2024-04-26 15:36:36.254444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.254798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.254813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 00:26:19.046 [2024-04-26 15:36:36.255189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.255544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.255559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 
00:26:19.046 [2024-04-26 15:36:36.255920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.256288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.256303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 00:26:19.046 [2024-04-26 15:36:36.256513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.256852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.256867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 00:26:19.046 [2024-04-26 15:36:36.257021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.257350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.257365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 00:26:19.046 [2024-04-26 15:36:36.257585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.257936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.257951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 
00:26:19.046 [2024-04-26 15:36:36.258332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.258717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.258733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 00:26:19.046 [2024-04-26 15:36:36.258987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.259320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.259334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 00:26:19.046 [2024-04-26 15:36:36.259723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.260079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.260094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 00:26:19.046 [2024-04-26 15:36:36.260421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.260770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.260788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 
00:26:19.046 [2024-04-26 15:36:36.261006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.261357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.261372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 00:26:19.046 [2024-04-26 15:36:36.261593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.261805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.261820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 00:26:19.046 [2024-04-26 15:36:36.262165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.262513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.262527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.046 qpair failed and we were unable to recover it. 00:26:19.046 [2024-04-26 15:36:36.262874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.046 [2024-04-26 15:36:36.263235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.263250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 
00:26:19.047 [2024-04-26 15:36:36.263585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.263966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.263981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 00:26:19.047 [2024-04-26 15:36:36.264318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.264692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.264706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 00:26:19.047 [2024-04-26 15:36:36.265053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.265400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.265415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 00:26:19.047 [2024-04-26 15:36:36.265775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.265998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.266013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 
00:26:19.047 [2024-04-26 15:36:36.266494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.266869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.266885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 00:26:19.047 [2024-04-26 15:36:36.267246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.267592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.267611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 00:26:19.047 [2024-04-26 15:36:36.267954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.268351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.268366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 00:26:19.047 [2024-04-26 15:36:36.268608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.268975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.268990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 
00:26:19.047 [2024-04-26 15:36:36.269231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.269638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.269652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 00:26:19.047 [2024-04-26 15:36:36.269987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.270335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.270349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 00:26:19.047 [2024-04-26 15:36:36.270694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.271047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.271062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 00:26:19.047 [2024-04-26 15:36:36.271410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.271850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.271865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 
00:26:19.047 [2024-04-26 15:36:36.272230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.272578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.272592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 00:26:19.047 [2024-04-26 15:36:36.272859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.273227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.273241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 00:26:19.047 [2024-04-26 15:36:36.273651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.274031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.274046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 00:26:19.047 [2024-04-26 15:36:36.274420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.274759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.274777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 
00:26:19.047 [2024-04-26 15:36:36.275115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.275476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.275492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 00:26:19.047 [2024-04-26 15:36:36.275790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.276175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.276190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 00:26:19.047 [2024-04-26 15:36:36.276399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.276721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.276736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 00:26:19.047 [2024-04-26 15:36:36.277088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.277463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.277479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 
00:26:19.047 [2024-04-26 15:36:36.277850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.278199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.278213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 00:26:19.047 [2024-04-26 15:36:36.278483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.278737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.278751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 00:26:19.047 [2024-04-26 15:36:36.279089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.279305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.279319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 00:26:19.047 [2024-04-26 15:36:36.279644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.279997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.280012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 
00:26:19.047 [2024-04-26 15:36:36.280043] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:19.047 [2024-04-26 15:36:36.280372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.280616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.280629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 00:26:19.047 [2024-04-26 15:36:36.281076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.281311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.281330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 00:26:19.047 [2024-04-26 15:36:36.281759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.282095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.282111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 00:26:19.047 [2024-04-26 15:36:36.282267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.282602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.047 [2024-04-26 15:36:36.282615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.047 qpair failed and we were unable to recover it. 
00:26:19.048 [2024-04-26 15:36:36.282946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.283322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.283336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.048 qpair failed and we were unable to recover it. 00:26:19.048 [2024-04-26 15:36:36.283725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.283935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.283949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.048 qpair failed and we were unable to recover it. 00:26:19.048 [2024-04-26 15:36:36.284344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.284670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.284685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.048 qpair failed and we were unable to recover it. 00:26:19.048 [2024-04-26 15:36:36.285040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.285251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.285267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.048 qpair failed and we were unable to recover it. 
00:26:19.048 [2024-04-26 15:36:36.285655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.286016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.286031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.048 qpair failed and we were unable to recover it. 00:26:19.048 [2024-04-26 15:36:36.286404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.286757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.286771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.048 qpair failed and we were unable to recover it. 00:26:19.048 [2024-04-26 15:36:36.287001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.287369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.287384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.048 qpair failed and we were unable to recover it. 00:26:19.048 [2024-04-26 15:36:36.287721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.288092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.288112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.048 qpair failed and we were unable to recover it. 
00:26:19.048 [2024-04-26 15:36:36.288449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.288775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.288789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.048 qpair failed and we were unable to recover it. 00:26:19.048 [2024-04-26 15:36:36.289158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.289544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.289559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.048 qpair failed and we were unable to recover it. 00:26:19.048 [2024-04-26 15:36:36.289807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.290161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.290176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.048 qpair failed and we were unable to recover it. 00:26:19.048 [2024-04-26 15:36:36.290508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.290871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.290887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.048 qpair failed and we were unable to recover it. 
00:26:19.048 [2024-04-26 15:36:36.291231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.291594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.291608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.048 qpair failed and we were unable to recover it. 00:26:19.048 [2024-04-26 15:36:36.291941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.292294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.292309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.048 qpair failed and we were unable to recover it. 00:26:19.048 [2024-04-26 15:36:36.292620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.293016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.293030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.048 qpair failed and we were unable to recover it. 00:26:19.048 [2024-04-26 15:36:36.293379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.293591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.293606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.048 qpair failed and we were unable to recover it. 
00:26:19.048 [2024-04-26 15:36:36.293807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.294157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.294172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.048 qpair failed and we were unable to recover it. 00:26:19.048 [2024-04-26 15:36:36.294386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.294757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.294776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.048 qpair failed and we were unable to recover it. 00:26:19.048 [2024-04-26 15:36:36.295114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.295470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.295484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.048 qpair failed and we were unable to recover it. 00:26:19.048 [2024-04-26 15:36:36.295850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.296212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.296227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.048 qpair failed and we were unable to recover it. 
00:26:19.048 [2024-04-26 15:36:36.296568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.297010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.297025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.048 qpair failed and we were unable to recover it. 00:26:19.048 [2024-04-26 15:36:36.297359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.048 [2024-04-26 15:36:36.297588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.297603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.049 qpair failed and we were unable to recover it. 00:26:19.049 [2024-04-26 15:36:36.298074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.298422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.298436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.049 qpair failed and we were unable to recover it. 00:26:19.049 [2024-04-26 15:36:36.298714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.299071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.299086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.049 qpair failed and we were unable to recover it. 
00:26:19.049 [2024-04-26 15:36:36.299357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.299715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.299731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.049 qpair failed and we were unable to recover it. 00:26:19.049 [2024-04-26 15:36:36.299969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.300393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.300408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.049 qpair failed and we were unable to recover it. 00:26:19.049 [2024-04-26 15:36:36.300701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.300923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.300938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.049 qpair failed and we were unable to recover it. 00:26:19.049 [2024-04-26 15:36:36.301293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.301662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.301680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.049 qpair failed and we were unable to recover it. 
00:26:19.049 [2024-04-26 15:36:36.301919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.302270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.302284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.049 qpair failed and we were unable to recover it. 00:26:19.049 [2024-04-26 15:36:36.302612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.302980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.302995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.049 qpair failed and we were unable to recover it. 00:26:19.049 [2024-04-26 15:36:36.303344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.303708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.303722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.049 qpair failed and we were unable to recover it. 00:26:19.049 [2024-04-26 15:36:36.304036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.304397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.304411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.049 qpair failed and we were unable to recover it. 
00:26:19.049 [2024-04-26 15:36:36.304742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.304977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.304992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.049 qpair failed and we were unable to recover it. 00:26:19.049 [2024-04-26 15:36:36.305354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.305722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.305737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.049 qpair failed and we were unable to recover it. 00:26:19.049 [2024-04-26 15:36:36.306094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.306297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.306311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.049 qpair failed and we were unable to recover it. 00:26:19.049 [2024-04-26 15:36:36.306638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.306900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.306915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.049 qpair failed and we were unable to recover it. 
00:26:19.049 [2024-04-26 15:36:36.307276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.307663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.307679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.049 qpair failed and we were unable to recover it. 00:26:19.049 [2024-04-26 15:36:36.308037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.308440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.308454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.049 qpair failed and we were unable to recover it. 00:26:19.049 [2024-04-26 15:36:36.308808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.309205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.309220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.049 qpair failed and we were unable to recover it. 00:26:19.049 [2024-04-26 15:36:36.309585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.309758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.049 [2024-04-26 15:36:36.309772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.049 qpair failed and we were unable to recover it. 
00:26:19.052 [2024-04-26 15:36:36.366959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.052 [2024-04-26 15:36:36.367319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.052 [2024-04-26 15:36:36.367333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.052 qpair failed and we were unable to recover it. 00:26:19.052 [2024-04-26 15:36:36.367658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.052 [2024-04-26 15:36:36.368010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.052 [2024-04-26 15:36:36.368026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.052 qpair failed and we were unable to recover it. 00:26:19.052 [2024-04-26 15:36:36.368363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.052 [2024-04-26 15:36:36.368714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.052 [2024-04-26 15:36:36.368728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.052 qpair failed and we were unable to recover it. 00:26:19.052 [2024-04-26 15:36:36.369072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.052 [2024-04-26 15:36:36.369428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.052 [2024-04-26 15:36:36.369442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.052 qpair failed and we were unable to recover it. 
00:26:19.052 [2024-04-26 15:36:36.369764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.052 [2024-04-26 15:36:36.370017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.052 [2024-04-26 15:36:36.370032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.052 qpair failed and we were unable to recover it. 00:26:19.052 [2024-04-26 15:36:36.370387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.052 [2024-04-26 15:36:36.370797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.052 [2024-04-26 15:36:36.370812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.052 qpair failed and we were unable to recover it. 00:26:19.052 [2024-04-26 15:36:36.371035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.052 [2024-04-26 15:36:36.371392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.052 [2024-04-26 15:36:36.371407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.052 qpair failed and we were unable to recover it. 00:26:19.052 [2024-04-26 15:36:36.371623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.052 [2024-04-26 15:36:36.371817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.052 [2024-04-26 15:36:36.371833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.052 qpair failed and we were unable to recover it. 
00:26:19.052 [2024-04-26 15:36:36.372046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.052 [2024-04-26 15:36:36.372376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.052 [2024-04-26 15:36:36.372391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.052 qpair failed and we were unable to recover it. 00:26:19.052 [2024-04-26 15:36:36.372590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.052 [2024-04-26 15:36:36.372957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.052 [2024-04-26 15:36:36.372973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.052 qpair failed and we were unable to recover it. 00:26:19.052 [2024-04-26 15:36:36.373398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.052 [2024-04-26 15:36:36.373474] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:19.052 [2024-04-26 15:36:36.373524] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:19.052 [2024-04-26 15:36:36.373532] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:19.052 [2024-04-26 15:36:36.373539] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:19.052 [2024-04-26 15:36:36.373545] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:19.052 [2024-04-26 15:36:36.373749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.052 [2024-04-26 15:36:36.373764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 00:26:19.053 [2024-04-26 15:36:36.373739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:26:19.053 [2024-04-26 15:36:36.373899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:26:19.053 [2024-04-26 15:36:36.374031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:19.053 [2024-04-26 15:36:36.374143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.374031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:26:19.053 [2024-04-26 15:36:36.374515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.374530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 00:26:19.053 [2024-04-26 15:36:36.374890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.375109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.375124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 
00:26:19.053 [2024-04-26 15:36:36.375498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.375834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.375859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 00:26:19.053 [2024-04-26 15:36:36.376097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.376449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.376463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 00:26:19.053 [2024-04-26 15:36:36.376683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.377032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.377048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 00:26:19.053 [2024-04-26 15:36:36.377403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.377777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.377792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 
00:26:19.053 [2024-04-26 15:36:36.377939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.378221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.378235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 00:26:19.053 [2024-04-26 15:36:36.378477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.378739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.378752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 00:26:19.053 [2024-04-26 15:36:36.379078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.379293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.379310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 00:26:19.053 [2024-04-26 15:36:36.379551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.379932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.379947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 
00:26:19.053 [2024-04-26 15:36:36.380274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.380536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.380550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 00:26:19.053 [2024-04-26 15:36:36.380889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.381223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.381238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 00:26:19.053 [2024-04-26 15:36:36.381568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.381919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.381935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 00:26:19.053 [2024-04-26 15:36:36.382272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.382479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.382492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 
00:26:19.053 [2024-04-26 15:36:36.382847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.383188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.383211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 00:26:19.053 [2024-04-26 15:36:36.383525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.383898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.383926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 00:26:19.053 [2024-04-26 15:36:36.384296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.384635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.384650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 00:26:19.053 [2024-04-26 15:36:36.384908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.385249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.385264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 
00:26:19.053 [2024-04-26 15:36:36.385625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.385835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.385859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 00:26:19.053 [2024-04-26 15:36:36.386117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.386299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.386315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 00:26:19.053 [2024-04-26 15:36:36.386569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.386791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.386809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 00:26:19.053 [2024-04-26 15:36:36.387021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.387288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.387314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 
00:26:19.053 [2024-04-26 15:36:36.387574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.387953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.387969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 00:26:19.053 [2024-04-26 15:36:36.388215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.388525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.388540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 00:26:19.053 [2024-04-26 15:36:36.388880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.389116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.389131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 00:26:19.053 [2024-04-26 15:36:36.389491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.389867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.389883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 
00:26:19.053 [2024-04-26 15:36:36.390251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.390674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.390700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 00:26:19.053 [2024-04-26 15:36:36.390933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.391154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.391181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 00:26:19.053 [2024-04-26 15:36:36.391409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.391807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.391822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.053 qpair failed and we were unable to recover it. 00:26:19.053 [2024-04-26 15:36:36.392141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.392371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.053 [2024-04-26 15:36:36.392387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.054 qpair failed and we were unable to recover it. 
00:26:19.054 [2024-04-26 15:36:36.392780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.393131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.393159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.054 qpair failed and we were unable to recover it. 00:26:19.054 [2024-04-26 15:36:36.393534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.393760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.393779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.054 qpair failed and we were unable to recover it. 00:26:19.054 [2024-04-26 15:36:36.394133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.394316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.394333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.054 qpair failed and we were unable to recover it. 00:26:19.054 [2024-04-26 15:36:36.394683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.395038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.395066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.054 qpair failed and we were unable to recover it. 
00:26:19.054 [2024-04-26 15:36:36.395445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.395800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.395819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.054 qpair failed and we were unable to recover it. 00:26:19.054 [2024-04-26 15:36:36.396180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.396560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.396575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.054 qpair failed and we were unable to recover it. 00:26:19.054 [2024-04-26 15:36:36.396926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.397144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.397166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.054 qpair failed and we were unable to recover it. 00:26:19.054 [2024-04-26 15:36:36.397560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.397814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.397832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.054 qpair failed and we were unable to recover it. 
00:26:19.054 [2024-04-26 15:36:36.398074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.398430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.398445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.054 qpair failed and we were unable to recover it. 00:26:19.054 [2024-04-26 15:36:36.398680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.399033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.399050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.054 qpair failed and we were unable to recover it. 00:26:19.054 [2024-04-26 15:36:36.399286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.399508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.399535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.054 qpair failed and we were unable to recover it. 00:26:19.054 [2024-04-26 15:36:36.399736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.399963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.399989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.054 qpair failed and we were unable to recover it. 
00:26:19.054 [2024-04-26 15:36:36.400405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.400780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.400795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.054 qpair failed and we were unable to recover it. 00:26:19.054 [2024-04-26 15:36:36.401139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.401429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.401444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.054 qpair failed and we were unable to recover it. 00:26:19.054 [2024-04-26 15:36:36.401806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.402155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.402171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.054 qpair failed and we were unable to recover it. 00:26:19.054 [2024-04-26 15:36:36.402409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.402780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.402807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.054 qpair failed and we were unable to recover it. 
00:26:19.054 [2024-04-26 15:36:36.402936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.403169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.403188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.054 qpair failed and we were unable to recover it. 00:26:19.054 [2024-04-26 15:36:36.403546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.403908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.403925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.054 qpair failed and we were unable to recover it. 00:26:19.054 [2024-04-26 15:36:36.404303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.404671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.404690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.054 qpair failed and we were unable to recover it. 00:26:19.054 [2024-04-26 15:36:36.404880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.405198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.054 [2024-04-26 15:36:36.405213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.054 qpair failed and we were unable to recover it. 
00:26:19.054 [2024-04-26 15:36:36.405424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-04-26 15:36:36.405794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-04-26 15:36:36.405810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.054 qpair failed and we were unable to recover it.
00:26:19.054 [2024-04-26 15:36:36.406146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-04-26 15:36:36.406364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-04-26 15:36:36.406380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.054 qpair failed and we were unable to recover it.
00:26:19.054 [2024-04-26 15:36:36.406724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-04-26 15:36:36.406950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-04-26 15:36:36.406975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.054 qpair failed and we were unable to recover it.
00:26:19.054 [2024-04-26 15:36:36.407359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-04-26 15:36:36.407634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-04-26 15:36:36.407650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.054 qpair failed and we were unable to recover it.
00:26:19.054 [2024-04-26 15:36:36.407991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-04-26 15:36:36.408361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-04-26 15:36:36.408386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.054 qpair failed and we were unable to recover it.
00:26:19.054 [2024-04-26 15:36:36.408788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-04-26 15:36:36.409127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-04-26 15:36:36.409146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.054 qpair failed and we were unable to recover it.
00:26:19.054 [2024-04-26 15:36:36.409515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-04-26 15:36:36.409888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-04-26 15:36:36.409914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.054 qpair failed and we were unable to recover it.
00:26:19.054 [2024-04-26 15:36:36.410317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-04-26 15:36:36.410535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-04-26 15:36:36.410553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.054 qpair failed and we were unable to recover it.
00:26:19.054 [2024-04-26 15:36:36.410747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-04-26 15:36:36.410961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-04-26 15:36:36.410976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.054 qpair failed and we were unable to recover it.
00:26:19.054 [2024-04-26 15:36:36.411336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-04-26 15:36:36.411574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-04-26 15:36:36.411588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.054 qpair failed and we were unable to recover it.
00:26:19.054 [2024-04-26 15:36:36.411960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-04-26 15:36:36.412346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.054 [2024-04-26 15:36:36.412370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.412740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.413133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.413154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.413473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.413706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.413722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.413975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.414304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.414320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.414676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.414936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.414962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.415338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.415705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.415724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.415968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.416344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.416360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.416704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.417056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.417082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.417455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.417792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.417811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.417944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.418331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.418346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.418553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.418791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.418817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.419030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.419297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.419322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.419546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.419952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.419980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.420362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.420564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.420579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.420809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.421034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.421050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.421388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.421637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.421665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.421930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.422195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.422213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.422575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.422778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.422794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.423123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.423495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.423520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.423897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.424124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.424150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.424509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.424890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.424917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.425301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.425386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.425411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.425635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.425969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.425985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.426369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.426746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.426773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.426995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.427358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.427373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.427729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.428117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.428134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.428450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.428829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.428865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.429224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.429610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.429636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.429981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.430228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.430243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.430600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.430930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.430946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.431166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.431394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.431420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.055 qpair failed and we were unable to recover it.
00:26:19.055 [2024-04-26 15:36:36.431781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.055 [2024-04-26 15:36:36.431999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.432020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.432407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.432620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.432636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.432995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.433350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.433377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.433876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.434101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.434128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.434502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.434741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.434757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.435088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.435512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.435537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.435916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.436297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.436313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.436643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.437014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.437030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.437257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.437633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.437658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.437891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.438221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.438242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.438440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.438641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.438664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.438876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.439201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.439217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.439410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.439612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.439628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.439866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.439965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.439989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.440214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.440553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.440581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.440928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.441296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.441311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.441671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.442031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.442047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.442398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.442767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.442781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.443106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.443308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.443323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.443664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.444017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.444032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.444248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.444499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.444521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.444732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.444939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.444954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.445299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.445668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.445682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.445903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.446113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.446128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.446506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.446957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.446974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.447306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.447638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.447652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.447863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.448254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.448272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.448453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.448779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.448794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.449178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.449553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.449567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.056 [2024-04-26 15:36:36.449918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.450292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.056 [2024-04-26 15:36:36.450306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.056 qpair failed and we were unable to recover it.
00:26:19.057 [2024-04-26 15:36:36.450673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.057 [2024-04-26 15:36:36.451035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.057 [2024-04-26 15:36:36.451050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.057 qpair failed and we were unable to recover it.
00:26:19.057 [2024-04-26 15:36:36.451400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.451733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.451748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 00:26:19.057 [2024-04-26 15:36:36.452042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.452372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.452387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 00:26:19.057 [2024-04-26 15:36:36.452786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.453146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.453162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 00:26:19.057 [2024-04-26 15:36:36.453382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.453593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.453608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 
00:26:19.057 [2024-04-26 15:36:36.453974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.454172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.454187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 00:26:19.057 [2024-04-26 15:36:36.454537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.454880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.454895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 00:26:19.057 [2024-04-26 15:36:36.455243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.455578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.455592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 00:26:19.057 [2024-04-26 15:36:36.455918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.456263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.456276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 
00:26:19.057 [2024-04-26 15:36:36.456605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.456972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.456987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 00:26:19.057 [2024-04-26 15:36:36.457274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.457651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.457666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 00:26:19.057 [2024-04-26 15:36:36.457987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.458374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.458389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 00:26:19.057 [2024-04-26 15:36:36.458602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.458958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.458973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 
00:26:19.057 [2024-04-26 15:36:36.459180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.459363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.459379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 00:26:19.057 [2024-04-26 15:36:36.459641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.459908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.459922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 00:26:19.057 [2024-04-26 15:36:36.460345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.460685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.460699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 00:26:19.057 [2024-04-26 15:36:36.460927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.461338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.461352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 
00:26:19.057 [2024-04-26 15:36:36.461665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.462030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.462045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 00:26:19.057 [2024-04-26 15:36:36.462421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.462782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.462798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 00:26:19.057 [2024-04-26 15:36:36.463017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.463232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.463246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 00:26:19.057 [2024-04-26 15:36:36.463605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.463780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.463795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 
00:26:19.057 [2024-04-26 15:36:36.464177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.464554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.464568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 00:26:19.057 [2024-04-26 15:36:36.464890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.465199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.465213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 00:26:19.057 [2024-04-26 15:36:36.465540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.465917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.465931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 00:26:19.057 [2024-04-26 15:36:36.466220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.466593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.466607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 
00:26:19.057 [2024-04-26 15:36:36.466786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.467027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.467043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 00:26:19.057 [2024-04-26 15:36:36.467372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.467586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.467601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 00:26:19.057 [2024-04-26 15:36:36.467976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.468322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.468336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.057 qpair failed and we were unable to recover it. 00:26:19.057 [2024-04-26 15:36:36.468703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.057 [2024-04-26 15:36:36.468912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.468927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.058 qpair failed and we were unable to recover it. 
00:26:19.058 [2024-04-26 15:36:36.469138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.469494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.469509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.058 qpair failed and we were unable to recover it. 00:26:19.058 [2024-04-26 15:36:36.469876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.470222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.470236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.058 qpair failed and we were unable to recover it. 00:26:19.058 [2024-04-26 15:36:36.470539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.470760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.470773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.058 qpair failed and we were unable to recover it. 00:26:19.058 [2024-04-26 15:36:36.471010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.471372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.471387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.058 qpair failed and we were unable to recover it. 
00:26:19.058 [2024-04-26 15:36:36.471603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.471932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.471947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.058 qpair failed and we were unable to recover it. 00:26:19.058 [2024-04-26 15:36:36.472269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.472615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.472629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.058 qpair failed and we were unable to recover it. 00:26:19.058 [2024-04-26 15:36:36.472949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.473254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.473268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.058 qpair failed and we were unable to recover it. 00:26:19.058 [2024-04-26 15:36:36.473440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.473891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.473906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.058 qpair failed and we were unable to recover it. 
00:26:19.058 [2024-04-26 15:36:36.474312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.474688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.474704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.058 qpair failed and we were unable to recover it. 00:26:19.058 [2024-04-26 15:36:36.475079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.475412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.475426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.058 qpair failed and we were unable to recover it. 00:26:19.058 [2024-04-26 15:36:36.475767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.476093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.476107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.058 qpair failed and we were unable to recover it. 00:26:19.058 [2024-04-26 15:36:36.476473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.476723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.476737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.058 qpair failed and we were unable to recover it. 
00:26:19.058 [2024-04-26 15:36:36.476979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.477047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.477061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.058 qpair failed and we were unable to recover it. 00:26:19.058 [2024-04-26 15:36:36.477449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.477815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.477830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.058 qpair failed and we were unable to recover it. 00:26:19.058 [2024-04-26 15:36:36.478199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.478498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.478513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.058 qpair failed and we were unable to recover it. 00:26:19.058 [2024-04-26 15:36:36.478865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.479204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.479217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.058 qpair failed and we were unable to recover it. 
00:26:19.058 [2024-04-26 15:36:36.479500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.479723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.479736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.058 qpair failed and we were unable to recover it. 00:26:19.058 [2024-04-26 15:36:36.480095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.480477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.480491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.058 qpair failed and we were unable to recover it. 00:26:19.058 [2024-04-26 15:36:36.480862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.481201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.481215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.058 qpair failed and we were unable to recover it. 00:26:19.058 [2024-04-26 15:36:36.481531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.481881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.481895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.058 qpair failed and we were unable to recover it. 
00:26:19.058 [2024-04-26 15:36:36.482124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.482399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.482412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.058 qpair failed and we were unable to recover it. 00:26:19.058 [2024-04-26 15:36:36.482746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.483100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.058 [2024-04-26 15:36:36.483115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.058 qpair failed and we were unable to recover it. 00:26:19.332 [2024-04-26 15:36:36.483441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.332 [2024-04-26 15:36:36.483785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.332 [2024-04-26 15:36:36.483801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.332 qpair failed and we were unable to recover it. 00:26:19.332 [2024-04-26 15:36:36.484025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.484373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.484387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.333 qpair failed and we were unable to recover it. 
00:26:19.333 [2024-04-26 15:36:36.484710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.485084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.485099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.333 qpair failed and we were unable to recover it. 00:26:19.333 [2024-04-26 15:36:36.485435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.485653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.485669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.333 qpair failed and we were unable to recover it. 00:26:19.333 [2024-04-26 15:36:36.485876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.486219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.486233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.333 qpair failed and we were unable to recover it. 00:26:19.333 [2024-04-26 15:36:36.486474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.486826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.486847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.333 qpair failed and we were unable to recover it. 
00:26:19.333 [2024-04-26 15:36:36.487113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.487339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.487353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.333 qpair failed and we were unable to recover it. 00:26:19.333 [2024-04-26 15:36:36.487740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.487966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.487980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.333 qpair failed and we were unable to recover it. 00:26:19.333 [2024-04-26 15:36:36.488212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.488409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.488422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.333 qpair failed and we were unable to recover it. 00:26:19.333 [2024-04-26 15:36:36.488792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.489105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.489120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.333 qpair failed and we were unable to recover it. 
00:26:19.333 [2024-04-26 15:36:36.489317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.489646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.489660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.333 qpair failed and we were unable to recover it. 00:26:19.333 [2024-04-26 15:36:36.489856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.490088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.490102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.333 qpair failed and we were unable to recover it. 00:26:19.333 [2024-04-26 15:36:36.490313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.490571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.490586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.333 qpair failed and we were unable to recover it. 00:26:19.333 [2024-04-26 15:36:36.490851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.491208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.491222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.333 qpair failed and we were unable to recover it. 
00:26:19.333 [2024-04-26 15:36:36.491391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.491705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.491720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.333 qpair failed and we were unable to recover it. 00:26:19.333 [2024-04-26 15:36:36.492079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.492414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.492427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.333 qpair failed and we were unable to recover it. 00:26:19.333 [2024-04-26 15:36:36.492841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.493262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.493275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.333 qpair failed and we were unable to recover it. 00:26:19.333 [2024-04-26 15:36:36.493500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.493819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.333 [2024-04-26 15:36:36.493833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.333 qpair failed and we were unable to recover it. 
00:26:19.336 [2024-04-26 15:36:36.547709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.548074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.548088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.336 qpair failed and we were unable to recover it. 00:26:19.336 [2024-04-26 15:36:36.548430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.548804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.548818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.336 qpair failed and we were unable to recover it. 00:26:19.336 [2024-04-26 15:36:36.549153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.549501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.549516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.336 qpair failed and we were unable to recover it. 00:26:19.336 [2024-04-26 15:36:36.549879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.550095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.550109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.336 qpair failed and we were unable to recover it. 
00:26:19.336 [2024-04-26 15:36:36.550462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.550811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.550825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.336 qpair failed and we were unable to recover it. 00:26:19.336 [2024-04-26 15:36:36.550902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.551089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.551104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.336 qpair failed and we were unable to recover it. 00:26:19.336 [2024-04-26 15:36:36.551429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.551615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.551628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.336 qpair failed and we were unable to recover it. 00:26:19.336 [2024-04-26 15:36:36.551865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.552069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.552083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.336 qpair failed and we were unable to recover it. 
00:26:19.336 [2024-04-26 15:36:36.552432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.552778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.552792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.336 qpair failed and we were unable to recover it. 00:26:19.336 [2024-04-26 15:36:36.553139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.553466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.553479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.336 qpair failed and we were unable to recover it. 00:26:19.336 [2024-04-26 15:36:36.553833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.554221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.554235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.336 qpair failed and we were unable to recover it. 00:26:19.336 [2024-04-26 15:36:36.554563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.554771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.554786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.336 qpair failed and we were unable to recover it. 
00:26:19.336 [2024-04-26 15:36:36.555039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.555360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.336 [2024-04-26 15:36:36.555373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.336 qpair failed and we were unable to recover it. 00:26:19.336 [2024-04-26 15:36:36.555766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.555883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.555900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.337 qpair failed and we were unable to recover it. 00:26:19.337 [2024-04-26 15:36:36.556223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.556557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.556572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.337 qpair failed and we were unable to recover it. 00:26:19.337 [2024-04-26 15:36:36.556780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.557110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.557126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.337 qpair failed and we were unable to recover it. 
00:26:19.337 [2024-04-26 15:36:36.557452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.557828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.557853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.337 qpair failed and we were unable to recover it. 00:26:19.337 [2024-04-26 15:36:36.558211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.558544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.558559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.337 qpair failed and we were unable to recover it. 00:26:19.337 [2024-04-26 15:36:36.558749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.559074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.559088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.337 qpair failed and we were unable to recover it. 00:26:19.337 [2024-04-26 15:36:36.559426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.559640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.559654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.337 qpair failed and we were unable to recover it. 
00:26:19.337 [2024-04-26 15:36:36.559986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.560342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.560355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.337 qpair failed and we were unable to recover it. 00:26:19.337 [2024-04-26 15:36:36.560561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.560910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.560925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.337 qpair failed and we were unable to recover it. 00:26:19.337 [2024-04-26 15:36:36.561261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.561467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.561481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.337 qpair failed and we were unable to recover it. 00:26:19.337 [2024-04-26 15:36:36.561836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.562189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.562203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.337 qpair failed and we were unable to recover it. 
00:26:19.337 [2024-04-26 15:36:36.562527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.562883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.562897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.337 qpair failed and we were unable to recover it. 00:26:19.337 [2024-04-26 15:36:36.563122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.563455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.563468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.337 qpair failed and we were unable to recover it. 00:26:19.337 [2024-04-26 15:36:36.563700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.563964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.563978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.337 qpair failed and we were unable to recover it. 00:26:19.337 [2024-04-26 15:36:36.564328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.564699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.564713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.337 qpair failed and we were unable to recover it. 
00:26:19.337 [2024-04-26 15:36:36.565073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.565418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.565432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.337 qpair failed and we were unable to recover it. 00:26:19.337 [2024-04-26 15:36:36.565868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.566171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.566185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.337 qpair failed and we were unable to recover it. 00:26:19.337 [2024-04-26 15:36:36.566528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.566714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.566728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.337 qpair failed and we were unable to recover it. 00:26:19.337 [2024-04-26 15:36:36.567118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.567328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.567344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.337 qpair failed and we were unable to recover it. 
00:26:19.337 [2024-04-26 15:36:36.567707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.568053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.568067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.337 qpair failed and we were unable to recover it. 00:26:19.337 [2024-04-26 15:36:36.568405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.568611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.568624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.337 qpair failed and we were unable to recover it. 00:26:19.337 [2024-04-26 15:36:36.568801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.569145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.569160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.337 qpair failed and we were unable to recover it. 00:26:19.337 [2024-04-26 15:36:36.569516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.569886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.337 [2024-04-26 15:36:36.569901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.338 qpair failed and we were unable to recover it. 
00:26:19.338 [2024-04-26 15:36:36.570255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.570464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.570477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.338 qpair failed and we were unable to recover it. 00:26:19.338 [2024-04-26 15:36:36.570823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.571079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.571093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.338 qpair failed and we were unable to recover it. 00:26:19.338 [2024-04-26 15:36:36.571437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.571774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.571788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.338 qpair failed and we were unable to recover it. 00:26:19.338 [2024-04-26 15:36:36.572208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.572538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.572552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.338 qpair failed and we were unable to recover it. 
00:26:19.338 [2024-04-26 15:36:36.572626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.572973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.572987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.338 qpair failed and we were unable to recover it. 00:26:19.338 [2024-04-26 15:36:36.573237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.573445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.573459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.338 qpair failed and we were unable to recover it. 00:26:19.338 [2024-04-26 15:36:36.573819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.574176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.574190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.338 qpair failed and we were unable to recover it. 00:26:19.338 [2024-04-26 15:36:36.574512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.574885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.574899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.338 qpair failed and we were unable to recover it. 
00:26:19.338 [2024-04-26 15:36:36.575269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.575334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.575347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.338 qpair failed and we were unable to recover it. 00:26:19.338 [2024-04-26 15:36:36.575669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.576013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.576027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.338 qpair failed and we were unable to recover it. 00:26:19.338 [2024-04-26 15:36:36.576203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.576561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.576575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.338 qpair failed and we were unable to recover it. 00:26:19.338 [2024-04-26 15:36:36.576908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.577291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.577304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.338 qpair failed and we were unable to recover it. 
00:26:19.338 [2024-04-26 15:36:36.577638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.578072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.578087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.338 qpair failed and we were unable to recover it. 00:26:19.338 [2024-04-26 15:36:36.578473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.578794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.578809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.338 qpair failed and we were unable to recover it. 00:26:19.338 [2024-04-26 15:36:36.579024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.579265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.579279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.338 qpair failed and we were unable to recover it. 00:26:19.338 [2024-04-26 15:36:36.579504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.579875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.579890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.338 qpair failed and we were unable to recover it. 
00:26:19.338 [2024-04-26 15:36:36.580267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.580473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.580487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.338 qpair failed and we were unable to recover it. 00:26:19.338 [2024-04-26 15:36:36.580820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.581070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.581085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.338 qpair failed and we were unable to recover it. 00:26:19.338 [2024-04-26 15:36:36.581410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.581604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.581618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.338 qpair failed and we were unable to recover it. 00:26:19.338 [2024-04-26 15:36:36.582026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.582231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.338 [2024-04-26 15:36:36.582245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.338 qpair failed and we were unable to recover it. 
00:26:19.338 [2024-04-26 15:36:36.582639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.338 [2024-04-26 15:36:36.583006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.338 [2024-04-26 15:36:36.583020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.338 qpair failed and we were unable to recover it.
00:26:19.338 [2024-04-26 15:36:36.583355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.338 [2024-04-26 15:36:36.583566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.338 [2024-04-26 15:36:36.583580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.338 qpair failed and we were unable to recover it.
00:26:19.338 [2024-04-26 15:36:36.584003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.338 [2024-04-26 15:36:36.584232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.338 [2024-04-26 15:36:36.584246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.338 qpair failed and we were unable to recover it.
00:26:19.338 [2024-04-26 15:36:36.584314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.338 [2024-04-26 15:36:36.584541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.338 [2024-04-26 15:36:36.584555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.338 qpair failed and we were unable to recover it.
00:26:19.338 [2024-04-26 15:36:36.584924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.338 [2024-04-26 15:36:36.585278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.338 [2024-04-26 15:36:36.585292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.338 qpair failed and we were unable to recover it.
00:26:19.338 [2024-04-26 15:36:36.585671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.338 [2024-04-26 15:36:36.585912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.338 [2024-04-26 15:36:36.585926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.338 qpair failed and we were unable to recover it.
00:26:19.338 [2024-04-26 15:36:36.586166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.338 [2024-04-26 15:36:36.586540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.338 [2024-04-26 15:36:36.586553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.338 qpair failed and we were unable to recover it.
00:26:19.338 [2024-04-26 15:36:36.587003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.338 [2024-04-26 15:36:36.587067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.338 [2024-04-26 15:36:36.587080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.338 qpair failed and we were unable to recover it.
00:26:19.338 [2024-04-26 15:36:36.587382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.587740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.587755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.587969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.588333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.588348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.588587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.588824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.588847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.588908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.589242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.589256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.589481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.589862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.589876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.590117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.590503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.590518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.590845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.591084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.591098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.591437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.591789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.591802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.592224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.592576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.592590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.592947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.593164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.593179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.593383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.593778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.593793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.594045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.594255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.594269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.594513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.594886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.594900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.595297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.595646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.595660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.595863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.596073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.596086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.596324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.596527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.596540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.596904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.597109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.597123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.597449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.597711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.597724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.598056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.598241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.598255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.598645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.599020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.599036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.599391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.599717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.599732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.599937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.600308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.600321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.600711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.600965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.600979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.601273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.601680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.601694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.602025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.602389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.602403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.602602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.602849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.602864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.603241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.603611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.603625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.603770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.604200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.604214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.339 qpair failed and we were unable to recover it.
00:26:19.339 [2024-04-26 15:36:36.604566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.604821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.339 [2024-04-26 15:36:36.604834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.605059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.605412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.605426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.605613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.605834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.605853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.606071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.606422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.606435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.606789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.607127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.607141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.607542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.607872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.607885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.608237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.608577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.608590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.608787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.609190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.609203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.609561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.609772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.609787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.609987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.610213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.610227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.610336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.610563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.610577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.610913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.611292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.611307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.611618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.611986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.612014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.612405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.612762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.612777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.613021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.613240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.613255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.613638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.614014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.614041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.614421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.614630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.614647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.614986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.615366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.615382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.615574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.615911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.615940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.616299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.616671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.616688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.616992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.617178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.617194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.617428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.617787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.617814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.618209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.618589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.618605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.618799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.618909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.618924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.619185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.619550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.619576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.619966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.620340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.620356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.620714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.621089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.621117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.621502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.621885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.621901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.622102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.622477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.340 [2024-04-26 15:36:36.622502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.340 qpair failed and we were unable to recover it.
00:26:19.340 [2024-04-26 15:36:36.622592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.622797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.622826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.341 qpair failed and we were unable to recover it.
00:26:19.341 [2024-04-26 15:36:36.623152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.623525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.623540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.341 qpair failed and we were unable to recover it.
00:26:19.341 [2024-04-26 15:36:36.623902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.624123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.624146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.341 qpair failed and we were unable to recover it.
00:26:19.341 [2024-04-26 15:36:36.624368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.624613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.624640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.341 qpair failed and we were unable to recover it.
00:26:19.341 [2024-04-26 15:36:36.625036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.625406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.625422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.341 qpair failed and we were unable to recover it.
00:26:19.341 [2024-04-26 15:36:36.625780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.626155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.626183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.341 qpair failed and we were unable to recover it.
00:26:19.341 [2024-04-26 15:36:36.626564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.626937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.626953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.341 qpair failed and we were unable to recover it.
00:26:19.341 [2024-04-26 15:36:36.627299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.627667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.627682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.341 qpair failed and we were unable to recover it.
00:26:19.341 [2024-04-26 15:36:36.627883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.628196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.628217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.341 qpair failed and we were unable to recover it.
00:26:19.341 [2024-04-26 15:36:36.628452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.628816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.628851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.341 qpair failed and we were unable to recover it.
00:26:19.341 [2024-04-26 15:36:36.629080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.629473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.629499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.341 qpair failed and we were unable to recover it.
00:26:19.341 [2024-04-26 15:36:36.629880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.630267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.630282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.341 qpair failed and we were unable to recover it.
00:26:19.341 [2024-04-26 15:36:36.630520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.630866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.630881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.341 qpair failed and we were unable to recover it.
00:26:19.341 [2024-04-26 15:36:36.631081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.631310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.631336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.341 qpair failed and we were unable to recover it.
00:26:19.341 [2024-04-26 15:36:36.631702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.631943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.631963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.341 qpair failed and we were unable to recover it.
00:26:19.341 [2024-04-26 15:36:36.632326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.632662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.632677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.341 qpair failed and we were unable to recover it.
00:26:19.341 [2024-04-26 15:36:36.633059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.633268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.633292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.341 qpair failed and we were unable to recover it.
00:26:19.341 [2024-04-26 15:36:36.633522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.633888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.633913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.341 qpair failed and we were unable to recover it.
00:26:19.341 [2024-04-26 15:36:36.634293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.634671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.634692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.341 qpair failed and we were unable to recover it.
00:26:19.341 [2024-04-26 15:36:36.635038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.635434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.635461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.341 qpair failed and we were unable to recover it.
00:26:19.341 [2024-04-26 15:36:36.635854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.636227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.636242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.341 qpair failed and we were unable to recover it.
00:26:19.341 [2024-04-26 15:36:36.636590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.636815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.636829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.341 qpair failed and we were unable to recover it.
00:26:19.341 [2024-04-26 15:36:36.637186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.637402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.341 [2024-04-26 15:36:36.637417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.341 qpair failed and we were unable to recover it.
00:26:19.341 [2024-04-26 15:36:36.637625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.341 [2024-04-26 15:36:36.637855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.341 [2024-04-26 15:36:36.637881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.341 qpair failed and we were unable to recover it. 00:26:19.341 [2024-04-26 15:36:36.638093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.341 [2024-04-26 15:36:36.638435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.341 [2024-04-26 15:36:36.638460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.341 qpair failed and we were unable to recover it. 00:26:19.341 [2024-04-26 15:36:36.638823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.341 [2024-04-26 15:36:36.639196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.341 [2024-04-26 15:36:36.639212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.341 qpair failed and we were unable to recover it. 00:26:19.341 [2024-04-26 15:36:36.639568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.341 [2024-04-26 15:36:36.639780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.341 [2024-04-26 15:36:36.639800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.341 qpair failed and we were unable to recover it. 
00:26:19.341 [2024-04-26 15:36:36.640200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.341 [2024-04-26 15:36:36.640545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.341 [2024-04-26 15:36:36.640564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.342 qpair failed and we were unable to recover it. 00:26:19.342 [2024-04-26 15:36:36.640897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.641279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.641299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.342 qpair failed and we were unable to recover it. 00:26:19.342 [2024-04-26 15:36:36.641488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.641863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.641889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.342 qpair failed and we were unable to recover it. 00:26:19.342 [2024-04-26 15:36:36.642279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.642532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.642549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.342 qpair failed and we were unable to recover it. 
00:26:19.342 [2024-04-26 15:36:36.642913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.643261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.643276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.342 qpair failed and we were unable to recover it. 00:26:19.342 [2024-04-26 15:36:36.643629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.643861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.643887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.342 qpair failed and we were unable to recover it. 00:26:19.342 [2024-04-26 15:36:36.643976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.644196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.644215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.342 qpair failed and we were unable to recover it. 00:26:19.342 [2024-04-26 15:36:36.644628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.644845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.644862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.342 qpair failed and we were unable to recover it. 
00:26:19.342 [2024-04-26 15:36:36.645115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.645489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.645514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.342 qpair failed and we were unable to recover it. 00:26:19.342 [2024-04-26 15:36:36.645910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.646312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.646332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.342 qpair failed and we were unable to recover it. 00:26:19.342 [2024-04-26 15:36:36.646623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.646992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.647008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.342 qpair failed and we were unable to recover it. 00:26:19.342 [2024-04-26 15:36:36.647398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.647772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.647804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.342 qpair failed and we were unable to recover it. 
00:26:19.342 [2024-04-26 15:36:36.648162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.648537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.648552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.342 qpair failed and we were unable to recover it. 00:26:19.342 [2024-04-26 15:36:36.648618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.648970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.648997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.342 qpair failed and we were unable to recover it. 00:26:19.342 [2024-04-26 15:36:36.649398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.649631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.649649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.342 qpair failed and we were unable to recover it. 00:26:19.342 [2024-04-26 15:36:36.649852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.650080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.650095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.342 qpair failed and we were unable to recover it. 
00:26:19.342 [2024-04-26 15:36:36.650289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.650687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.650711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.342 qpair failed and we were unable to recover it. 00:26:19.342 [2024-04-26 15:36:36.650945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.651165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.651189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.342 qpair failed and we were unable to recover it. 00:26:19.342 [2024-04-26 15:36:36.651570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.651935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.651954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.342 qpair failed and we were unable to recover it. 00:26:19.342 [2024-04-26 15:36:36.652323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.652531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.652545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.342 qpair failed and we were unable to recover it. 
00:26:19.342 [2024-04-26 15:36:36.652911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.653122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.653147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.342 qpair failed and we were unable to recover it. 00:26:19.342 [2024-04-26 15:36:36.653521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.653909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.653925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.342 qpair failed and we were unable to recover it. 00:26:19.342 [2024-04-26 15:36:36.654255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.654606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.654620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.342 qpair failed and we were unable to recover it. 00:26:19.342 [2024-04-26 15:36:36.654823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.655180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.342 [2024-04-26 15:36:36.655205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.342 qpair failed and we were unable to recover it. 
00:26:19.343 [2024-04-26 15:36:36.655614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.656033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.656051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 00:26:19.343 [2024-04-26 15:36:36.656364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.656585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.656599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 00:26:19.343 [2024-04-26 15:36:36.657008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.657364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.657378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 00:26:19.343 [2024-04-26 15:36:36.657590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.657822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.657836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 
00:26:19.343 [2024-04-26 15:36:36.658079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.658426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.658439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 00:26:19.343 [2024-04-26 15:36:36.658791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.659145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.659160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 00:26:19.343 [2024-04-26 15:36:36.659362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.659697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.659712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 00:26:19.343 [2024-04-26 15:36:36.659785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.659977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.659994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 
00:26:19.343 [2024-04-26 15:36:36.660213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.660606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.660621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 00:26:19.343 [2024-04-26 15:36:36.660816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.661064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.661080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 00:26:19.343 [2024-04-26 15:36:36.661411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.661622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.661639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 00:26:19.343 [2024-04-26 15:36:36.662004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.662368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.662381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 
00:26:19.343 [2024-04-26 15:36:36.662753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.663100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.663114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 00:26:19.343 [2024-04-26 15:36:36.663175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.663516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.663531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 00:26:19.343 [2024-04-26 15:36:36.663908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.664106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.664120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 00:26:19.343 [2024-04-26 15:36:36.664505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.664858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.664873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 
00:26:19.343 [2024-04-26 15:36:36.665264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.665620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.665634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 00:26:19.343 [2024-04-26 15:36:36.665963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.666289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.666304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 00:26:19.343 [2024-04-26 15:36:36.666527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.666861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.666876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 00:26:19.343 [2024-04-26 15:36:36.667243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.667601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.667615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 
00:26:19.343 [2024-04-26 15:36:36.667975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.668362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.668376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 00:26:19.343 [2024-04-26 15:36:36.668704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.668914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.668929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 00:26:19.343 [2024-04-26 15:36:36.669320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.669674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.669689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 00:26:19.343 [2024-04-26 15:36:36.669804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.670113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.670127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 
00:26:19.343 [2024-04-26 15:36:36.670324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.670655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.670669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 00:26:19.343 [2024-04-26 15:36:36.670869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.671186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.671200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 00:26:19.343 [2024-04-26 15:36:36.671352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.671705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.671719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 00:26:19.343 [2024-04-26 15:36:36.671947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.672328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.343 [2024-04-26 15:36:36.672343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.343 qpair failed and we were unable to recover it. 
00:26:19.343 [2024-04-26 15:36:36.672680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.673066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.673081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.673411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.673621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.673635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.673851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.673993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.674006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.674224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.674583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.674597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.674956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.675313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.675327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.675680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.676029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.676044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.676245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.676555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.676569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.676793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.677137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.677151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.677502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.677880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.677895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.678234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.678588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.678602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.678930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.679308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.679322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.679580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.679790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.679804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.679993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.680209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.680223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.680420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.680770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.680784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.681129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.681343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.681357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.681774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.682147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.682162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.682405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.682748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.682763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.682997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.683369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.683384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.683581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.683928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.683943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.684149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.684548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.684562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.684886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.685268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.685282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.685490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.685704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.685718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.685931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.686178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.686193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.686539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.686897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.686911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.687272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.687653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.687667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.688069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.688289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.688302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.688367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.688693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.688706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.689075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.689325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.689339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.344 [2024-04-26 15:36:36.689688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.690054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.344 [2024-04-26 15:36:36.690069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.344 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.690214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.690412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.690425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.690620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.690956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.690971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.691324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.691674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.691688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.692030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.692379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.692394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.692735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.692948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.692962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.693328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.693697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.693711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.693936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.694151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.694164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.694375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.694573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.694587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.694782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.695030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.695045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.695378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.695730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.695744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.696105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.696328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.696342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.696744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.697098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.697113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.697453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.697755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.697769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.697968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.698285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.698299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.698523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.698874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.698889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.699250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.699625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.699639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.699982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.700160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.700173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.700464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.700680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.700693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.700878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.701199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.701213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.701577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.701793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.701808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.702036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.702405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.702419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.702820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.702992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.703007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.703423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.703631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.703645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.704021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.704369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.704383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.704729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.705102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.705116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.705444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.705819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.705833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.706247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.706576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.706591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.706785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.707131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.707145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.345 qpair failed and we were unable to recover it.
00:26:19.345 [2024-04-26 15:36:36.707369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.345 [2024-04-26 15:36:36.707574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.707589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.346 qpair failed and we were unable to recover it.
00:26:19.346 [2024-04-26 15:36:36.707938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.708165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.708179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.346 qpair failed and we were unable to recover it.
00:26:19.346 [2024-04-26 15:36:36.708410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.708619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.708633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.346 qpair failed and we were unable to recover it.
00:26:19.346 [2024-04-26 15:36:36.708988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.709362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.709376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.346 qpair failed and we were unable to recover it.
00:26:19.346 [2024-04-26 15:36:36.709716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.710081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.710095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.346 qpair failed and we were unable to recover it.
00:26:19.346 [2024-04-26 15:36:36.710425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.710638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.710653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.346 qpair failed and we were unable to recover it.
00:26:19.346 [2024-04-26 15:36:36.711017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.711365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.711379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.346 qpair failed and we were unable to recover it.
00:26:19.346 [2024-04-26 15:36:36.711633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.711946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.711960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.346 qpair failed and we were unable to recover it.
00:26:19.346 [2024-04-26 15:36:36.712305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.712707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.712720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.346 qpair failed and we were unable to recover it.
00:26:19.346 [2024-04-26 15:36:36.712912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.713240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.713254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.346 qpair failed and we were unable to recover it.
00:26:19.346 [2024-04-26 15:36:36.713483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.713893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.713907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.346 qpair failed and we were unable to recover it.
00:26:19.346 [2024-04-26 15:36:36.714254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.714628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.714642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.346 qpair failed and we were unable to recover it.
00:26:19.346 [2024-04-26 15:36:36.714985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.715346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.715360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.346 qpair failed and we were unable to recover it.
00:26:19.346 [2024-04-26 15:36:36.715716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.716077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.716092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.346 qpair failed and we were unable to recover it.
00:26:19.346 [2024-04-26 15:36:36.716336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.716719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.716733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.346 qpair failed and we were unable to recover it.
00:26:19.346 [2024-04-26 15:36:36.717080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.717431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.717445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.346 qpair failed and we were unable to recover it.
00:26:19.346 [2024-04-26 15:36:36.717649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.717965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.717979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.346 qpair failed and we were unable to recover it.
00:26:19.346 [2024-04-26 15:36:36.718311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.718663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.718676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.346 qpair failed and we were unable to recover it.
00:26:19.346 [2024-04-26 15:36:36.719052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.719410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.719424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.346 qpair failed and we were unable to recover it.
00:26:19.346 [2024-04-26 15:36:36.719780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.720122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.720136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.346 qpair failed and we were unable to recover it.
00:26:19.346 [2024-04-26 15:36:36.720398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.720747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.720761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.346 qpair failed and we were unable to recover it.
00:26:19.346 [2024-04-26 15:36:36.720964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.721291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.721304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.346 qpair failed and we were unable to recover it.
00:26:19.346 [2024-04-26 15:36:36.721657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.722032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.722047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.346 qpair failed and we were unable to recover it.
00:26:19.346 [2024-04-26 15:36:36.722411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.722630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.346 [2024-04-26 15:36:36.722648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.346 qpair failed and we were unable to recover it.
00:26:19.346 [2024-04-26 15:36:36.722997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.347 [2024-04-26 15:36:36.723356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.347 [2024-04-26 15:36:36.723371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.347 qpair failed and we were unable to recover it.
00:26:19.347 [2024-04-26 15:36:36.723721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.347 [2024-04-26 15:36:36.724068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.347 [2024-04-26 15:36:36.724083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.347 qpair failed and we were unable to recover it.
00:26:19.347 [2024-04-26 15:36:36.724449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.347 [2024-04-26 15:36:36.724827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.347 [2024-04-26 15:36:36.724847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.347 qpair failed and we were unable to recover it.
00:26:19.347 [2024-04-26 15:36:36.725199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.347 [2024-04-26 15:36:36.725575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.347 [2024-04-26 15:36:36.725588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.347 qpair failed and we were unable to recover it.
00:26:19.347 [2024-04-26 15:36:36.725793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.347 [2024-04-26 15:36:36.726145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.347 [2024-04-26 15:36:36.726159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.347 qpair failed and we were unable to recover it.
00:26:19.347 [2024-04-26 15:36:36.726508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.347 [2024-04-26 15:36:36.726876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.347 [2024-04-26 15:36:36.726890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.347 qpair failed and we were unable to recover it.
00:26:19.347 [2024-04-26 15:36:36.727219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.347 [2024-04-26 15:36:36.727573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.347 [2024-04-26 15:36:36.727586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.347 qpair failed and we were unable to recover it.
00:26:19.347 [2024-04-26 15:36:36.727814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.728141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.728155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.347 qpair failed and we were unable to recover it. 00:26:19.347 [2024-04-26 15:36:36.728480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.728857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.728873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.347 qpair failed and we were unable to recover it. 00:26:19.347 [2024-04-26 15:36:36.729092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.729466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.729483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.347 qpair failed and we were unable to recover it. 00:26:19.347 [2024-04-26 15:36:36.729849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.730212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.730226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.347 qpair failed and we were unable to recover it. 
00:26:19.347 [2024-04-26 15:36:36.730554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.730910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.730924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.347 qpair failed and we were unable to recover it. 00:26:19.347 [2024-04-26 15:36:36.731128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.731510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.731523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.347 qpair failed and we were unable to recover it. 00:26:19.347 [2024-04-26 15:36:36.731693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.732009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.732023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.347 qpair failed and we were unable to recover it. 00:26:19.347 [2024-04-26 15:36:36.732373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.732593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.732607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.347 qpair failed and we were unable to recover it. 
00:26:19.347 [2024-04-26 15:36:36.732945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.733302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.733315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.347 qpair failed and we were unable to recover it. 00:26:19.347 [2024-04-26 15:36:36.733648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.733861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.733876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.347 qpair failed and we were unable to recover it. 00:26:19.347 [2024-04-26 15:36:36.734085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.734457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.734470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.347 qpair failed and we were unable to recover it. 00:26:19.347 [2024-04-26 15:36:36.734698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.735037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.735052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.347 qpair failed and we were unable to recover it. 
00:26:19.347 [2024-04-26 15:36:36.735269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.735466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.735484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.347 qpair failed and we were unable to recover it. 00:26:19.347 [2024-04-26 15:36:36.735886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.736236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.736250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.347 qpair failed and we were unable to recover it. 00:26:19.347 [2024-04-26 15:36:36.736601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.736947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.736961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.347 qpair failed and we were unable to recover it. 00:26:19.347 [2024-04-26 15:36:36.737360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.737777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.737790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.347 qpair failed and we were unable to recover it. 
00:26:19.347 [2024-04-26 15:36:36.738113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.738481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.738495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.347 qpair failed and we were unable to recover it. 00:26:19.347 [2024-04-26 15:36:36.738822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.739019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.739033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.347 qpair failed and we were unable to recover it. 00:26:19.347 [2024-04-26 15:36:36.739389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.739765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.739779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.347 qpair failed and we were unable to recover it. 00:26:19.347 [2024-04-26 15:36:36.740171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.740383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.740397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.347 qpair failed and we were unable to recover it. 
00:26:19.347 [2024-04-26 15:36:36.740617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.740967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.740981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.347 qpair failed and we were unable to recover it. 00:26:19.347 [2024-04-26 15:36:36.741317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.741573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-04-26 15:36:36.741586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 00:26:19.348 [2024-04-26 15:36:36.741782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.742125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.742142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 00:26:19.348 [2024-04-26 15:36:36.742515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.742853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.742869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 
00:26:19.348 [2024-04-26 15:36:36.743228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.743609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.743624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 00:26:19.348 [2024-04-26 15:36:36.743980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.744317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.744332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 00:26:19.348 [2024-04-26 15:36:36.744706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.745132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.745146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 00:26:19.348 [2024-04-26 15:36:36.745579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.745905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.745919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 
00:26:19.348 [2024-04-26 15:36:36.746284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.746626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.746640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 00:26:19.348 [2024-04-26 15:36:36.747060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.747417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.747431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 00:26:19.348 [2024-04-26 15:36:36.747654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.747972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.747986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 00:26:19.348 [2024-04-26 15:36:36.748347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.748729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.748743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 
00:26:19.348 [2024-04-26 15:36:36.749082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.749292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.749306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 00:26:19.348 [2024-04-26 15:36:36.749639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.750008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.750022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 00:26:19.348 [2024-04-26 15:36:36.750343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.750407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.750420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 00:26:19.348 [2024-04-26 15:36:36.750781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.750889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.750903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 
00:26:19.348 [2024-04-26 15:36:36.751259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.751643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.751658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 00:26:19.348 [2024-04-26 15:36:36.752048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.752407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.752421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 00:26:19.348 [2024-04-26 15:36:36.752818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.753169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.753184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 00:26:19.348 [2024-04-26 15:36:36.753555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.753889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.753904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 
00:26:19.348 [2024-04-26 15:36:36.754303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.754669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.754683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 00:26:19.348 [2024-04-26 15:36:36.755042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.755380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.755394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 00:26:19.348 [2024-04-26 15:36:36.755721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.756061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.756076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 00:26:19.348 [2024-04-26 15:36:36.756398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.756774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.756788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 
00:26:19.348 [2024-04-26 15:36:36.757024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.757379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.757393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 00:26:19.348 [2024-04-26 15:36:36.757726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.758048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.758063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 00:26:19.348 [2024-04-26 15:36:36.758250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.758627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.758640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 00:26:19.348 [2024-04-26 15:36:36.759012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.759221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.759237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 
00:26:19.348 [2024-04-26 15:36:36.759455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.759800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.759814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 00:26:19.348 [2024-04-26 15:36:36.760019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.760375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-04-26 15:36:36.760390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.348 qpair failed and we were unable to recover it. 00:26:19.348 [2024-04-26 15:36:36.760592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.349 [2024-04-26 15:36:36.760962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.349 [2024-04-26 15:36:36.760976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.349 qpair failed and we were unable to recover it. 00:26:19.349 [2024-04-26 15:36:36.761235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.349 [2024-04-26 15:36:36.761589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.349 [2024-04-26 15:36:36.761603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.349 qpair failed and we were unable to recover it. 
00:26:19.349 [2024-04-26 15:36:36.761928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.349 [2024-04-26 15:36:36.762308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.349 [2024-04-26 15:36:36.762323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.349 qpair failed and we were unable to recover it. 00:26:19.349 [2024-04-26 15:36:36.762649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.349 [2024-04-26 15:36:36.762889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.349 [2024-04-26 15:36:36.762903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.349 qpair failed and we were unable to recover it. 00:26:19.349 [2024-04-26 15:36:36.762969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.349 [2024-04-26 15:36:36.763169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.349 [2024-04-26 15:36:36.763183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.349 qpair failed and we were unable to recover it. 00:26:19.349 [2024-04-26 15:36:36.763526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.349 [2024-04-26 15:36:36.763849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.349 [2024-04-26 15:36:36.763863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.349 qpair failed and we were unable to recover it. 
00:26:19.349 [2024-04-26 15:36:36.764212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.349 [2024-04-26 15:36:36.764554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.349 [2024-04-26 15:36:36.764568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.349 qpair failed and we were unable to recover it. 00:26:19.349 [2024-04-26 15:36:36.764914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.349 [2024-04-26 15:36:36.765292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.349 [2024-04-26 15:36:36.765307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.349 qpair failed and we were unable to recover it. 00:26:19.349 [2024-04-26 15:36:36.765553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.349 [2024-04-26 15:36:36.765909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.349 [2024-04-26 15:36:36.765923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.349 qpair failed and we were unable to recover it. 00:26:19.349 [2024-04-26 15:36:36.765996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.349 [2024-04-26 15:36:36.766339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.349 [2024-04-26 15:36:36.766354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.349 qpair failed and we were unable to recover it. 
00:26:19.621 [last 4 messages repeated: connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it. -- sequence continues through 2024-04-26 15:36:36.820103]
00:26:19.621 [2024-04-26 15:36:36.820430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.621 [2024-04-26 15:36:36.820813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.621 [2024-04-26 15:36:36.820827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 00:26:19.622 [2024-04-26 15:36:36.821170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.821546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.821561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 00:26:19.622 [2024-04-26 15:36:36.821875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.822223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.822237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 00:26:19.622 [2024-04-26 15:36:36.822432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.822697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.822712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 
00:26:19.622 [2024-04-26 15:36:36.823060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.823270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.823285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 00:26:19.622 [2024-04-26 15:36:36.823552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.823768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.823784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 00:26:19.622 [2024-04-26 15:36:36.824128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.824508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.824524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 00:26:19.622 [2024-04-26 15:36:36.824886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.825244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.825266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 
00:26:19.622 [2024-04-26 15:36:36.825614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.825956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.825971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 00:26:19.622 [2024-04-26 15:36:36.826317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.826682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.826699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 00:26:19.622 [2024-04-26 15:36:36.826897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.827104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.827118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 00:26:19.622 [2024-04-26 15:36:36.827193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.827487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.827502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 
00:26:19.622 [2024-04-26 15:36:36.827861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.828284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.828299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 00:26:19.622 [2024-04-26 15:36:36.828650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.828896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.828912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 00:26:19.622 [2024-04-26 15:36:36.829297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.829624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.829639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 00:26:19.622 [2024-04-26 15:36:36.829827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.830158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.830175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 
00:26:19.622 [2024-04-26 15:36:36.830538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.830747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.830763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 00:26:19.622 [2024-04-26 15:36:36.831134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.831456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.831471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 00:26:19.622 [2024-04-26 15:36:36.831815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.832158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.832174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 00:26:19.622 [2024-04-26 15:36:36.832500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.832726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.832745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 
00:26:19.622 [2024-04-26 15:36:36.832986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.833335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.833350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 00:26:19.622 [2024-04-26 15:36:36.833690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.834065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.834081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 00:26:19.622 [2024-04-26 15:36:36.834404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.834759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.834773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 00:26:19.622 [2024-04-26 15:36:36.834990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.835053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.835067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 
00:26:19.622 [2024-04-26 15:36:36.835385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.835772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.835787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 00:26:19.622 [2024-04-26 15:36:36.836118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.836487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.836502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 00:26:19.622 [2024-04-26 15:36:36.836701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.837021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.837037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 00:26:19.622 [2024-04-26 15:36:36.837396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.837746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.837759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 
00:26:19.622 [2024-04-26 15:36:36.838090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.838468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.622 [2024-04-26 15:36:36.838483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.622 qpair failed and we were unable to recover it. 00:26:19.623 [2024-04-26 15:36:36.838890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.839113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.839131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.623 qpair failed and we were unable to recover it. 00:26:19.623 [2024-04-26 15:36:36.839367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.839706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.839720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.623 qpair failed and we were unable to recover it. 00:26:19.623 [2024-04-26 15:36:36.839982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.840265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.840281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.623 qpair failed and we were unable to recover it. 
00:26:19.623 [2024-04-26 15:36:36.840474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.840855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.840871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.623 qpair failed and we were unable to recover it. 00:26:19.623 [2024-04-26 15:36:36.841126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.841472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.841485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.623 qpair failed and we were unable to recover it. 00:26:19.623 [2024-04-26 15:36:36.841849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.842210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.842224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.623 qpair failed and we were unable to recover it. 00:26:19.623 [2024-04-26 15:36:36.842557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.842945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.842959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.623 qpair failed and we were unable to recover it. 
00:26:19.623 [2024-04-26 15:36:36.843325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.843697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.843710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.623 qpair failed and we were unable to recover it. 00:26:19.623 [2024-04-26 15:36:36.844038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.844407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.844421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.623 qpair failed and we were unable to recover it. 00:26:19.623 [2024-04-26 15:36:36.844816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.845171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.845186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.623 qpair failed and we were unable to recover it. 00:26:19.623 [2024-04-26 15:36:36.845417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.845796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.845814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.623 qpair failed and we were unable to recover it. 
00:26:19.623 [2024-04-26 15:36:36.846178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.846528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.846543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.623 qpair failed and we were unable to recover it. 00:26:19.623 [2024-04-26 15:36:36.846942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.847166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.847180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.623 qpair failed and we were unable to recover it. 00:26:19.623 [2024-04-26 15:36:36.847365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.847596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.847610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.623 qpair failed and we were unable to recover it. 00:26:19.623 [2024-04-26 15:36:36.847972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.848177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.848191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.623 qpair failed and we were unable to recover it. 
00:26:19.623 [2024-04-26 15:36:36.848549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.848921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.848936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.623 qpair failed and we were unable to recover it. 00:26:19.623 [2024-04-26 15:36:36.849294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.849510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.849524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.623 qpair failed and we were unable to recover it. 00:26:19.623 [2024-04-26 15:36:36.849859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.850227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.850242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.623 qpair failed and we were unable to recover it. 00:26:19.623 [2024-04-26 15:36:36.850437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.850511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.850525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.623 qpair failed and we were unable to recover it. 
00:26:19.623 [2024-04-26 15:36:36.850897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.851322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.851336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.623 qpair failed and we were unable to recover it. 00:26:19.623 [2024-04-26 15:36:36.851539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.851740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.851754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.623 qpair failed and we were unable to recover it. 00:26:19.623 [2024-04-26 15:36:36.851999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.852290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.623 [2024-04-26 15:36:36.852304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.623 qpair failed and we were unable to recover it. 00:26:19.623 [2024-04-26 15:36:36.852664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.852872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.852887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 
00:26:19.624 [2024-04-26 15:36:36.853228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.853448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.853463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 00:26:19.624 [2024-04-26 15:36:36.853814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.854173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.854188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 00:26:19.624 [2024-04-26 15:36:36.854521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.854726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.854739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 00:26:19.624 [2024-04-26 15:36:36.855093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.855443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.855457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 
00:26:19.624 [2024-04-26 15:36:36.855683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.856031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.856045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 00:26:19.624 [2024-04-26 15:36:36.856249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.856320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.856334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 00:26:19.624 [2024-04-26 15:36:36.856656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.856988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.857003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 00:26:19.624 [2024-04-26 15:36:36.857368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.857451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.857464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 
00:26:19.624 [2024-04-26 15:36:36.857818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.858161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.858176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 00:26:19.624 [2024-04-26 15:36:36.858510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.858889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.858903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 00:26:19.624 [2024-04-26 15:36:36.859240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.859594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.859608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 00:26:19.624 [2024-04-26 15:36:36.859941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.860131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.860145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 
00:26:19.624 [2024-04-26 15:36:36.860474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.860833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.860852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 00:26:19.624 [2024-04-26 15:36:36.861178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.861552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.861566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 00:26:19.624 [2024-04-26 15:36:36.861899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.862280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.862293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 00:26:19.624 [2024-04-26 15:36:36.862495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.862805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.862819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 
00:26:19.624 [2024-04-26 15:36:36.863027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.863395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.863408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 00:26:19.624 [2024-04-26 15:36:36.863807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.864170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.864184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 00:26:19.624 [2024-04-26 15:36:36.864543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.864604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.864618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 00:26:19.624 [2024-04-26 15:36:36.864971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.865322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.865337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 
00:26:19.624 [2024-04-26 15:36:36.865567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.865945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.865962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 00:26:19.624 [2024-04-26 15:36:36.866029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.866362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.866377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 00:26:19.624 [2024-04-26 15:36:36.866726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.867023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.867039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 00:26:19.624 [2024-04-26 15:36:36.867402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.867744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.867758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 
00:26:19.624 [2024-04-26 15:36:36.868113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.868484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.868499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 00:26:19.624 [2024-04-26 15:36:36.868870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.869223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.869238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 00:26:19.624 [2024-04-26 15:36:36.869477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.869855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.624 [2024-04-26 15:36:36.869870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.624 qpair failed and we were unable to recover it. 00:26:19.624 [2024-04-26 15:36:36.870196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.870576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.870589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 
00:26:19.625 [2024-04-26 15:36:36.870917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.871277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.871291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 00:26:19.625 [2024-04-26 15:36:36.871616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.871854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.871869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 00:26:19.625 [2024-04-26 15:36:36.872109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.872412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.872427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 00:26:19.625 [2024-04-26 15:36:36.872630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.873005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.873021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 
00:26:19.625 [2024-04-26 15:36:36.873219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.873314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.873326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 00:26:19.625 [2024-04-26 15:36:36.873607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.873822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.873836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 00:26:19.625 [2024-04-26 15:36:36.873914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.874266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.874279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 00:26:19.625 [2024-04-26 15:36:36.874610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.874816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.874830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 
00:26:19.625 [2024-04-26 15:36:36.875060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.875227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.875243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 00:26:19.625 [2024-04-26 15:36:36.875629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.875722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.875736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 00:26:19.625 [2024-04-26 15:36:36.876084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.876452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.876468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 00:26:19.625 [2024-04-26 15:36:36.876829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.877047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.877062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 
00:26:19.625 [2024-04-26 15:36:36.877437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.877771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.877785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 00:26:19.625 [2024-04-26 15:36:36.878154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.878535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.878550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 00:26:19.625 [2024-04-26 15:36:36.878878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.879123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.879137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 00:26:19.625 [2024-04-26 15:36:36.879493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.879851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.879865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 
00:26:19.625 [2024-04-26 15:36:36.880074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.880259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.880274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 00:26:19.625 [2024-04-26 15:36:36.880528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.880720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.880736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 00:26:19.625 [2024-04-26 15:36:36.880984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.881228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.881243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 00:26:19.625 [2024-04-26 15:36:36.881631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.882003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.882019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 
00:26:19.625 [2024-04-26 15:36:36.882379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.882700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.882713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 00:26:19.625 [2024-04-26 15:36:36.883076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.883318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.883332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 00:26:19.625 [2024-04-26 15:36:36.883728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.883920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.883935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 00:26:19.625 [2024-04-26 15:36:36.884311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.884521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.884536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 
00:26:19.625 [2024-04-26 15:36:36.884799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.885169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.885198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 00:26:19.625 [2024-04-26 15:36:36.885570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.885978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.885994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 00:26:19.625 [2024-04-26 15:36:36.886208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.886408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.886424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.625 qpair failed and we were unable to recover it. 00:26:19.625 [2024-04-26 15:36:36.886780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.625 [2024-04-26 15:36:36.887131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.887159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 
00:26:19.626 [2024-04-26 15:36:36.887415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.887785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.887803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 00:26:19.626 [2024-04-26 15:36:36.888137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.888507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.888522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 00:26:19.626 [2024-04-26 15:36:36.888851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.889198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.889222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 00:26:19.626 [2024-04-26 15:36:36.889485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.889574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.889597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 
00:26:19.626 [2024-04-26 15:36:36.889965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.890343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.890359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 00:26:19.626 [2024-04-26 15:36:36.890742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.891101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.891121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 00:26:19.626 [2024-04-26 15:36:36.891357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.891561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.891587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 00:26:19.626 [2024-04-26 15:36:36.891963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.892327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.892342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 
00:26:19.626 [2024-04-26 15:36:36.892703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.893064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.893080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 00:26:19.626 [2024-04-26 15:36:36.893433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.893637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.893653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 00:26:19.626 [2024-04-26 15:36:36.894008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.894252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.894278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 00:26:19.626 [2024-04-26 15:36:36.894669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.895030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.895046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 
00:26:19.626 [2024-04-26 15:36:36.895409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.895616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.895630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 00:26:19.626 [2024-04-26 15:36:36.895697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.895937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.895953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 00:26:19.626 [2024-04-26 15:36:36.896300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.896516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.896543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 00:26:19.626 [2024-04-26 15:36:36.897023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.897342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.897357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 
00:26:19.626 [2024-04-26 15:36:36.897578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.897959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.897975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 00:26:19.626 [2024-04-26 15:36:36.898333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.898710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.898725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 00:26:19.626 [2024-04-26 15:36:36.898807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.899143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.899159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 00:26:19.626 [2024-04-26 15:36:36.899484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.899820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.899857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 
00:26:19.626 [2024-04-26 15:36:36.900237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.900607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.900636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 00:26:19.626 [2024-04-26 15:36:36.900737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.900962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.900980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 00:26:19.626 [2024-04-26 15:36:36.901353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.901725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.901741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 00:26:19.626 [2024-04-26 15:36:36.901933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.902319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.902346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 
00:26:19.626 [2024-04-26 15:36:36.902582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.902860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.902887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 00:26:19.626 [2024-04-26 15:36:36.903143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.903496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.903525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 00:26:19.626 [2024-04-26 15:36:36.903945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.904162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.904181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 00:26:19.626 [2024-04-26 15:36:36.904624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.904969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.626 [2024-04-26 15:36:36.904985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.626 qpair failed and we were unable to recover it. 
00:26:19.626 [2024-04-26 15:36:36.905352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.905605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.905620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.905958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.906319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.906346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.906698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.907032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.907055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.907366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.907732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.907749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.908117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.908487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.908514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.908912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.909257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.909273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.909623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.909844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.909860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.910245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.910479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.910505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.910886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.911134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.911161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.911596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.911977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.911995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.912357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.912576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.912600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.912802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.913155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.913181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.913565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.913822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.913861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.914237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.914610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.914628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.914878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.915134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.915163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.915429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.915669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.915696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.916091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.916306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.916322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.916570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.916925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.916951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.917329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.917711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.917737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.918097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.918467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.918495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.918865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.919082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.919100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.919453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.919793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.919808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.920245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.920568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.920595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.920835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.921214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.921235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.921621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.921994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.922011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.922255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.922470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.922492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.922590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.922791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.922819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.923071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.923413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.923432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.627 qpair failed and we were unable to recover it.
00:26:19.627 [2024-04-26 15:36:36.923622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.627 [2024-04-26 15:36:36.924006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.924023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.628 [2024-04-26 15:36:36.924365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.924711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.924738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.628 [2024-04-26 15:36:36.925143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.925514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.925531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.628 [2024-04-26 15:36:36.925783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.926153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.926171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.628 [2024-04-26 15:36:36.926532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.926907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.926929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.628 [2024-04-26 15:36:36.927318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.927732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.927751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.628 [2024-04-26 15:36:36.927941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.928290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.928312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.628 [2024-04-26 15:36:36.928542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.928918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.928942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.628 [2024-04-26 15:36:36.929089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.929414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.929441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.628 [2024-04-26 15:36:36.929606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.929876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.929905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.628 [2024-04-26 15:36:36.930151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.930521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.930542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.628 [2024-04-26 15:36:36.930907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.931309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.931326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.628 [2024-04-26 15:36:36.931687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.931940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.931966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.628 [2024-04-26 15:36:36.932343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.932718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.932735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.628 [2024-04-26 15:36:36.933083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.933175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.933191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.628 [2024-04-26 15:36:36.933542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.933609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.933624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.628 [2024-04-26 15:36:36.933976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.934364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.934386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.628 [2024-04-26 15:36:36.934729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.934976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.935002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.628 [2024-04-26 15:36:36.935252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.935482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.935508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.628 [2024-04-26 15:36:36.935601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.935832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.935859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.628 [2024-04-26 15:36:36.936275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.936649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.936665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.628 [2024-04-26 15:36:36.937030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.937414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.937441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.628 [2024-04-26 15:36:36.937879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.938105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.938123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.628 [2024-04-26 15:36:36.938329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.938536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.938551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.628 [2024-04-26 15:36:36.938905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.939284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.628 [2024-04-26 15:36:36.939300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.628 qpair failed and we were unable to recover it.
00:26:19.629 [2024-04-26 15:36:36.939499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.939767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.939796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.629 qpair failed and we were unable to recover it.
00:26:19.629 [2024-04-26 15:36:36.940089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.940487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.940521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.629 qpair failed and we were unable to recover it.
00:26:19.629 [2024-04-26 15:36:36.940872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.941236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.941252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.629 qpair failed and we were unable to recover it.
00:26:19.629 [2024-04-26 15:36:36.941486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.941902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.941918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.629 qpair failed and we were unable to recover it.
00:26:19.629 [2024-04-26 15:36:36.942108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.942378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.942394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.629 qpair failed and we were unable to recover it.
00:26:19.629 [2024-04-26 15:36:36.942751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.943147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.943175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.629 qpair failed and we were unable to recover it.
00:26:19.629 [2024-04-26 15:36:36.943571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.943931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.943948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.629 qpair failed and we were unable to recover it.
00:26:19.629 [2024-04-26 15:36:36.944186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.944253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.944268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.629 qpair failed and we were unable to recover it.
00:26:19.629 [2024-04-26 15:36:36.944613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.944835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.944871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.629 qpair failed and we were unable to recover it.
00:26:19.629 [2024-04-26 15:36:36.945119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.945337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.945356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.629 qpair failed and we were unable to recover it.
00:26:19.629 [2024-04-26 15:36:36.945668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.946034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.946050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.629 qpair failed and we were unable to recover it.
00:26:19.629 [2024-04-26 15:36:36.946246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.946619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.946651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.629 qpair failed and we were unable to recover it.
00:26:19.629 [2024-04-26 15:36:36.947083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.947350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.947368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.629 qpair failed and we were unable to recover it.
00:26:19.629 [2024-04-26 15:36:36.947744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.948118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.948145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.629 qpair failed and we were unable to recover it.
00:26:19.629 [2024-04-26 15:36:36.948313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.948600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.948626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.629 qpair failed and we were unable to recover it.
00:26:19.629 [2024-04-26 15:36:36.948821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.949072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.949089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.629 qpair failed and we were unable to recover it.
00:26:19.629 [2024-04-26 15:36:36.949274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.949603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.949619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.629 qpair failed and we were unable to recover it.
00:26:19.629 [2024-04-26 15:36:36.949816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.950096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.950112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.629 qpair failed and we were unable to recover it.
00:26:19.629 [2024-04-26 15:36:36.950469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.950858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.629 [2024-04-26 15:36:36.950873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.629 qpair failed and we were unable to recover it.
00:26:19.629 [2024-04-26 15:36:36.951213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.629 [2024-04-26 15:36:36.951285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.629 [2024-04-26 15:36:36.951297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.629 qpair failed and we were unable to recover it. 00:26:19.629 [2024-04-26 15:36:36.951632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.629 [2024-04-26 15:36:36.951983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.629 [2024-04-26 15:36:36.951998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.629 qpair failed and we were unable to recover it. 00:26:19.629 [2024-04-26 15:36:36.952363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.629 [2024-04-26 15:36:36.952556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.629 [2024-04-26 15:36:36.952570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.629 qpair failed and we were unable to recover it. 00:26:19.629 [2024-04-26 15:36:36.952997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.629 [2024-04-26 15:36:36.953346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.629 [2024-04-26 15:36:36.953360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.629 qpair failed and we were unable to recover it. 
00:26:19.629 [2024-04-26 15:36:36.953703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.629 [2024-04-26 15:36:36.954019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.629 [2024-04-26 15:36:36.954034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.629 qpair failed and we were unable to recover it. 00:26:19.629 [2024-04-26 15:36:36.954260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.629 [2024-04-26 15:36:36.954337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.629 [2024-04-26 15:36:36.954352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.629 qpair failed and we were unable to recover it. 00:26:19.629 [2024-04-26 15:36:36.954691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.629 [2024-04-26 15:36:36.954916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.629 [2024-04-26 15:36:36.954930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.629 qpair failed and we were unable to recover it. 00:26:19.629 [2024-04-26 15:36:36.955141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.629 [2024-04-26 15:36:36.955479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.629 [2024-04-26 15:36:36.955493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.629 qpair failed and we were unable to recover it. 
00:26:19.629 [2024-04-26 15:36:36.955697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.629 [2024-04-26 15:36:36.955926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.629 [2024-04-26 15:36:36.955940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.629 qpair failed and we were unable to recover it. 00:26:19.630 [2024-04-26 15:36:36.956323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.956677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.956693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 00:26:19.630 [2024-04-26 15:36:36.957033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.957387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.957402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 00:26:19.630 [2024-04-26 15:36:36.957760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.958100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.958114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 
00:26:19.630 [2024-04-26 15:36:36.958475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.958751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.958766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 00:26:19.630 [2024-04-26 15:36:36.959135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.959490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.959503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 00:26:19.630 [2024-04-26 15:36:36.959882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.960275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.960289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 00:26:19.630 [2024-04-26 15:36:36.960642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.961005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.961020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 
00:26:19.630 [2024-04-26 15:36:36.961387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.961759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.961774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 00:26:19.630 [2024-04-26 15:36:36.961852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.962065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.962081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 00:26:19.630 [2024-04-26 15:36:36.962454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.962845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.962859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 00:26:19.630 [2024-04-26 15:36:36.963221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.963429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.963442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 
00:26:19.630 [2024-04-26 15:36:36.963647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.964031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.964045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 00:26:19.630 [2024-04-26 15:36:36.964392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.964766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.964779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 00:26:19.630 [2024-04-26 15:36:36.965154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.965534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.965548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 00:26:19.630 [2024-04-26 15:36:36.965915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.966336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.966350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 
00:26:19.630 [2024-04-26 15:36:36.966583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.966805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.966820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 00:26:19.630 [2024-04-26 15:36:36.967096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.967302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.967318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 00:26:19.630 [2024-04-26 15:36:36.967573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.967821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.967844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 00:26:19.630 [2024-04-26 15:36:36.968212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.968583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.968600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 
00:26:19.630 [2024-04-26 15:36:36.968932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.969311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.969327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 00:26:19.630 [2024-04-26 15:36:36.969702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.970095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.970110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 00:26:19.630 [2024-04-26 15:36:36.970341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.970694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.970708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 00:26:19.630 [2024-04-26 15:36:36.970986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.971184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.971197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 
00:26:19.630 [2024-04-26 15:36:36.971543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.971753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.971768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 00:26:19.630 [2024-04-26 15:36:36.972102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.972477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.972492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 00:26:19.630 [2024-04-26 15:36:36.972732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.973115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.973131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 00:26:19.630 [2024-04-26 15:36:36.973465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.973720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.973733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 
00:26:19.630 [2024-04-26 15:36:36.974207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.974524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.630 [2024-04-26 15:36:36.974538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.630 qpair failed and we were unable to recover it. 00:26:19.630 [2024-04-26 15:36:36.974971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.975328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.975342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.631 qpair failed and we were unable to recover it. 00:26:19.631 [2024-04-26 15:36:36.975686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.975753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.975767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.631 qpair failed and we were unable to recover it. 00:26:19.631 [2024-04-26 15:36:36.975997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.976358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.976372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.631 qpair failed and we were unable to recover it. 
00:26:19.631 [2024-04-26 15:36:36.976721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.977053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.977068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.631 qpair failed and we were unable to recover it. 00:26:19.631 [2024-04-26 15:36:36.977488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.977701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.977716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.631 qpair failed and we were unable to recover it. 00:26:19.631 [2024-04-26 15:36:36.978079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.978452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.978466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.631 qpair failed and we were unable to recover it. 00:26:19.631 [2024-04-26 15:36:36.978672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.978870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.978886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.631 qpair failed and we were unable to recover it. 
00:26:19.631 [2024-04-26 15:36:36.979240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.979589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.979602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.631 qpair failed and we were unable to recover it. 00:26:19.631 [2024-04-26 15:36:36.979940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.980327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.980341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.631 qpair failed and we were unable to recover it. 00:26:19.631 [2024-04-26 15:36:36.980672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.981022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.981036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.631 qpair failed and we were unable to recover it. 00:26:19.631 [2024-04-26 15:36:36.981390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.981742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.981756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.631 qpair failed and we were unable to recover it. 
00:26:19.631 [2024-04-26 15:36:36.982094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.982433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.982447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.631 qpair failed and we were unable to recover it. 00:26:19.631 [2024-04-26 15:36:36.982801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.983017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.983031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.631 qpair failed and we were unable to recover it. 00:26:19.631 [2024-04-26 15:36:36.983264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.983507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.983520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.631 qpair failed and we were unable to recover it. 00:26:19.631 [2024-04-26 15:36:36.983727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.983955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.631 [2024-04-26 15:36:36.983971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.631 qpair failed and we were unable to recover it. 
00:26:19.631 15:36:36 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:26:19.631 15:36:36 -- common/autotest_common.sh@850 -- # return 0
00:26:19.631 15:36:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:26:19.631 15:36:36 -- common/autotest_common.sh@716 -- # xtrace_disable
00:26:19.631 15:36:36 -- common/autotest_common.sh@10 -- # set +x
00:26:19.632 [2024-04-26 15:36:36.990830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.632 [2024-04-26 15:36:36.991202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.632 [2024-04-26 15:36:36.991218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.632 qpair failed and we were unable to recover it. 00:26:19.632 [2024-04-26 15:36:36.991592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.632 [2024-04-26 15:36:36.991941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.632 [2024-04-26 15:36:36.991957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.632 qpair failed and we were unable to recover it. 00:26:19.632 [2024-04-26 15:36:36.992390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.632 [2024-04-26 15:36:36.992730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.632 [2024-04-26 15:36:36.992744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.632 qpair failed and we were unable to recover it. 00:26:19.632 [2024-04-26 15:36:36.992811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.632 [2024-04-26 15:36:36.993020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.632 [2024-04-26 15:36:36.993036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.632 qpair failed and we were unable to recover it. 
00:26:19.634 15:36:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:19.634 15:36:37 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:19.634 15:36:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:19.634 15:36:37 -- common/autotest_common.sh@10 -- # set +x
00:26:19.634 [2024-04-26 15:36:37.028130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.634 [2024-04-26 15:36:37.028335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.634 [2024-04-26 15:36:37.028349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.634 qpair failed and we were unable to recover it. 00:26:19.634 [2024-04-26 15:36:37.028693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.634 [2024-04-26 15:36:37.028945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.634 [2024-04-26 15:36:37.028960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.634 qpair failed and we were unable to recover it. 00:26:19.634 [2024-04-26 15:36:37.029324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.634 [2024-04-26 15:36:37.029580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.634 [2024-04-26 15:36:37.029594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.634 qpair failed and we were unable to recover it. 00:26:19.634 [2024-04-26 15:36:37.029921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.634 [2024-04-26 15:36:37.030176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.634 [2024-04-26 15:36:37.030190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.634 qpair failed and we were unable to recover it. 
00:26:19.634 [2024-04-26 15:36:37.030538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.634 [2024-04-26 15:36:37.030926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.634 [2024-04-26 15:36:37.030940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.634 qpair failed and we were unable to recover it. 00:26:19.634 [2024-04-26 15:36:37.031148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.634 [2024-04-26 15:36:37.031218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.634 [2024-04-26 15:36:37.031232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.634 qpair failed and we were unable to recover it. 00:26:19.634 [2024-04-26 15:36:37.031590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.634 [2024-04-26 15:36:37.031928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.634 [2024-04-26 15:36:37.031942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.634 qpair failed and we were unable to recover it. 00:26:19.634 [2024-04-26 15:36:37.032312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.634 [2024-04-26 15:36:37.032699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.634 [2024-04-26 15:36:37.032713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.634 qpair failed and we were unable to recover it. 
00:26:19.634 [2024-04-26 15:36:37.033115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.634 [2024-04-26 15:36:37.033325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.635 [2024-04-26 15:36:37.033339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.635 qpair failed and we were unable to recover it. 00:26:19.635 [2024-04-26 15:36:37.033542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.635 [2024-04-26 15:36:37.033861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.635 [2024-04-26 15:36:37.033875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.635 qpair failed and we were unable to recover it. 00:26:19.635 [2024-04-26 15:36:37.034220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.635 [2024-04-26 15:36:37.034597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.635 [2024-04-26 15:36:37.034612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.635 qpair failed and we were unable to recover it. 00:26:19.635 [2024-04-26 15:36:37.034982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.635 [2024-04-26 15:36:37.035203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.635 [2024-04-26 15:36:37.035218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.635 qpair failed and we were unable to recover it. 
00:26:19.635 [2024-04-26 15:36:37.035559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.635 [2024-04-26 15:36:37.035759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.635 [2024-04-26 15:36:37.035773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.635 qpair failed and we were unable to recover it. 00:26:19.635 [2024-04-26 15:36:37.036166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.635 [2024-04-26 15:36:37.036504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.635 [2024-04-26 15:36:37.036518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.635 qpair failed and we were unable to recover it. 00:26:19.635 [2024-04-26 15:36:37.036857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.635 [2024-04-26 15:36:37.037219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.635 [2024-04-26 15:36:37.037234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.635 qpair failed and we were unable to recover it. 00:26:19.635 [2024-04-26 15:36:37.037565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.635 [2024-04-26 15:36:37.037935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.635 [2024-04-26 15:36:37.037950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.635 qpair failed and we were unable to recover it. 
00:26:19.636 [2024-04-26 15:36:37.051539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.636 [2024-04-26 15:36:37.051796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.636 [2024-04-26 15:36:37.051810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.636 qpair failed and we were unable to recover it.
00:26:19.636 Malloc0
00:26:19.636 [2024-04-26 15:36:37.052037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.636 [2024-04-26 15:36:37.052247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.636 [2024-04-26 15:36:37.052261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.636 qpair failed and we were unable to recover it.
00:26:19.636 [2024-04-26 15:36:37.052595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.636 15:36:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:19.636 [2024-04-26 15:36:37.052967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.636 [2024-04-26 15:36:37.052982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.636 qpair failed and we were unable to recover it.
00:26:19.636 15:36:37 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:26:19.636 15:36:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:19.636 [2024-04-26 15:36:37.053346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.636 15:36:37 -- common/autotest_common.sh@10 -- # set +x
00:26:19.636 [2024-04-26 15:36:37.053733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.636 [2024-04-26 15:36:37.053747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.636 qpair failed and we were unable to recover it.
00:26:19.636 [2024-04-26 15:36:37.054129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.636 [2024-04-26 15:36:37.054507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.636 [2024-04-26 15:36:37.054526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.636 qpair failed and we were unable to recover it.
00:26:19.636 [2024-04-26 15:36:37.054598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.636 [2024-04-26 15:36:37.054816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.636 [2024-04-26 15:36:37.054831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.636 qpair failed and we were unable to recover it.
00:26:19.636 [2024-04-26 15:36:37.057794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.636 [2024-04-26 15:36:37.058144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.636 [2024-04-26 15:36:37.058159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.636 qpair failed and we were unable to recover it.
00:26:19.636 [2024-04-26 15:36:37.058528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.904 [2024-04-26 15:36:37.058911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.904 [2024-04-26 15:36:37.058928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.904 qpair failed and we were unable to recover it.
00:26:19.904 [2024-04-26 15:36:37.059157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.904 [2024-04-26 15:36:37.059186] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:19.904 [2024-04-26 15:36:37.059375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.904 [2024-04-26 15:36:37.059390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.904 qpair failed and we were unable to recover it.
00:26:19.904 [2024-04-26 15:36:37.059719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.904 [2024-04-26 15:36:37.060079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.904 [2024-04-26 15:36:37.060094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.904 qpair failed and we were unable to recover it.
00:26:19.905 [2024-04-26 15:36:37.067069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.905 [2024-04-26 15:36:37.067452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.905 [2024-04-26 15:36:37.067466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.905 qpair failed and we were unable to recover it.
00:26:19.905 [2024-04-26 15:36:37.067693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.905 [2024-04-26 15:36:37.067881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.905 [2024-04-26 15:36:37.067894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.905 qpair failed and we were unable to recover it.
00:26:19.905 [2024-04-26 15:36:37.068223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.905 15:36:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:19.905 [2024-04-26 15:36:37.068595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.905 [2024-04-26 15:36:37.068611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.905 qpair failed and we were unable to recover it.
00:26:19.905 15:36:37 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:19.905 15:36:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:19.905 [2024-04-26 15:36:37.068977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.905 15:36:37 -- common/autotest_common.sh@10 -- # set +x
00:26:19.905 [2024-04-26 15:36:37.069191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.905 [2024-04-26 15:36:37.069206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.905 qpair failed and we were unable to recover it.
00:26:19.905 [2024-04-26 15:36:37.069553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.905 [2024-04-26 15:36:37.069904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.905 [2024-04-26 15:36:37.069919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.905 qpair failed and we were unable to recover it.
00:26:19.905 [2024-04-26 15:36:37.070162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.905 [2024-04-26 15:36:37.070557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.905 [2024-04-26 15:36:37.070571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.905 qpair failed and we were unable to recover it.
00:26:19.905 15:36:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:19.905 [2024-04-26 15:36:37.080567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.905 15:36:37 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:19.905 15:36:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:19.905 [2024-04-26 15:36:37.080923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.905 [2024-04-26 15:36:37.080938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.905 qpair failed and we were unable to recover it.
00:26:19.905 15:36:37 -- common/autotest_common.sh@10 -- # set +x
00:26:19.905 [2024-04-26 15:36:37.081205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.905 [2024-04-26 15:36:37.081583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.905 [2024-04-26 15:36:37.081597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.905 qpair failed and we were unable to recover it.
00:26:19.905 [2024-04-26 15:36:37.081921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.905 [2024-04-26 15:36:37.082136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.906 [2024-04-26 15:36:37.082150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420
00:26:19.906 qpair failed and we were unable to recover it.
00:26:19.906 [2024-04-26 15:36:37.089736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.089957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.089972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.906 qpair failed and we were unable to recover it. 00:26:19.906 [2024-04-26 15:36:37.090205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.090568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.090582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.906 qpair failed and we were unable to recover it. 00:26:19.906 [2024-04-26 15:36:37.090796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.091165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.091179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.906 qpair failed and we were unable to recover it. 00:26:19.906 [2024-04-26 15:36:37.091403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.091743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.091758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.906 qpair failed and we were unable to recover it. 
00:26:19.906 [2024-04-26 15:36:37.092081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 15:36:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.906 [2024-04-26 15:36:37.092465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.092481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.906 qpair failed and we were unable to recover it. 00:26:19.906 15:36:37 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:19.906 [2024-04-26 15:36:37.092730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.092985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.093001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.906 15:36:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.906 qpair failed and we were unable to recover it. 00:26:19.906 15:36:37 -- common/autotest_common.sh@10 -- # set +x 00:26:19.906 [2024-04-26 15:36:37.093332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.093711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.093725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.906 qpair failed and we were unable to recover it. 
00:26:19.906 [2024-04-26 15:36:37.093929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.094342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.094356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.906 qpair failed and we were unable to recover it. 00:26:19.906 [2024-04-26 15:36:37.094697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.095050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.095065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.906 qpair failed and we were unable to recover it. 00:26:19.906 [2024-04-26 15:36:37.095484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.095851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.095869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.906 qpair failed and we were unable to recover it. 00:26:19.906 [2024-04-26 15:36:37.096111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.096528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.096541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.906 qpair failed and we were unable to recover it. 
00:26:19.906 [2024-04-26 15:36:37.096875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.097087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.097101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.906 qpair failed and we were unable to recover it. 00:26:19.906 [2024-04-26 15:36:37.097415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.097779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.097793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.906 qpair failed and we were unable to recover it. 00:26:19.906 [2024-04-26 15:36:37.098076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.098271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.098285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.906 qpair failed and we were unable to recover it. 00:26:19.906 [2024-04-26 15:36:37.098627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.099005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.906 [2024-04-26 15:36:37.099020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb678000b90 with addr=10.0.0.2, port=4420 00:26:19.907 qpair failed and we were unable to recover it. 
00:26:19.907 [2024-04-26 15:36:37.099376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.907 [2024-04-26 15:36:37.099578] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:19.907 [2024-04-26 15:36:37.103558] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:26:19.907 [2024-04-26 15:36:37.103631] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fb678000b90 (107): Transport endpoint is not connected 00:26:19.907 [2024-04-26 15:36:37.103695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.907 qpair failed and we were unable to recover it. 00:26:19.907 15:36:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.907 15:36:37 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:19.907 15:36:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.907 15:36:37 -- common/autotest_common.sh@10 -- # set +x 00:26:19.907 [2024-04-26 15:36:37.110402] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.907 [2024-04-26 15:36:37.110523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.907 [2024-04-26 15:36:37.110555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.907 [2024-04-26 15:36:37.110568] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.907 [2024-04-26 15:36:37.110578] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:19.907 [2024-04-26 15:36:37.110605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.907 qpair failed and we were unable to recover it.
00:26:19.907 15:36:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.907 15:36:37 -- host/target_disconnect.sh@58 -- # wait 1789642 00:26:19.907 [2024-04-26 15:36:37.120120] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.907 [2024-04-26 15:36:37.120213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.907 [2024-04-26 15:36:37.120243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.907 [2024-04-26 15:36:37.120254] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.907 [2024-04-26 15:36:37.120265] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:19.907 [2024-04-26 15:36:37.120292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.907 qpair failed and we were unable to recover it.
00:26:19.907 [2024-04-26 15:36:37.130084] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.907 [2024-04-26 15:36:37.130174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.907 [2024-04-26 15:36:37.130201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.907 [2024-04-26 15:36:37.130212] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.907 [2024-04-26 15:36:37.130221] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:19.907 [2024-04-26 15:36:37.130245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.907 qpair failed and we were unable to recover it. 
00:26:19.907 [2024-04-26 15:36:37.140196] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.907 [2024-04-26 15:36:37.140293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.907 [2024-04-26 15:36:37.140318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.907 [2024-04-26 15:36:37.140328] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.907 [2024-04-26 15:36:37.140336] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:19.907 [2024-04-26 15:36:37.140359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.907 qpair failed and we were unable to recover it. 
00:26:19.907 [2024-04-26 15:36:37.150103] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.907 [2024-04-26 15:36:37.150189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.907 [2024-04-26 15:36:37.150217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.907 [2024-04-26 15:36:37.150226] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.907 [2024-04-26 15:36:37.150232] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:19.907 [2024-04-26 15:36:37.150255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.907 qpair failed and we were unable to recover it. 
00:26:19.907 [2024-04-26 15:36:37.160376] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.907 [2024-04-26 15:36:37.160463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.907 [2024-04-26 15:36:37.160498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.907 [2024-04-26 15:36:37.160508] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.907 [2024-04-26 15:36:37.160515] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:19.907 [2024-04-26 15:36:37.160537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.907 qpair failed and we were unable to recover it. 
00:26:19.907 [2024-04-26 15:36:37.170242] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.907 [2024-04-26 15:36:37.170315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.907 [2024-04-26 15:36:37.170342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.907 [2024-04-26 15:36:37.170350] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.907 [2024-04-26 15:36:37.170357] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:19.907 [2024-04-26 15:36:37.170377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.907 qpair failed and we were unable to recover it. 
00:26:19.907 [2024-04-26 15:36:37.180241] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.907 [2024-04-26 15:36:37.180325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.907 [2024-04-26 15:36:37.180347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.907 [2024-04-26 15:36:37.180354] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.907 [2024-04-26 15:36:37.180361] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:19.907 [2024-04-26 15:36:37.180379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.907 qpair failed and we were unable to recover it. 
00:26:19.907 [2024-04-26 15:36:37.190305] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.907 [2024-04-26 15:36:37.190376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.907 [2024-04-26 15:36:37.190396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.907 [2024-04-26 15:36:37.190404] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.907 [2024-04-26 15:36:37.190410] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:19.907 [2024-04-26 15:36:37.190426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.907 qpair failed and we were unable to recover it. 
00:26:19.907 [2024-04-26 15:36:37.200328] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.907 [2024-04-26 15:36:37.200407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.907 [2024-04-26 15:36:37.200427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.907 [2024-04-26 15:36:37.200435] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.907 [2024-04-26 15:36:37.200445] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:19.907 [2024-04-26 15:36:37.200462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.907 qpair failed and we were unable to recover it. 
00:26:19.907 [2024-04-26 15:36:37.210409] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.907 [2024-04-26 15:36:37.210498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.907 [2024-04-26 15:36:37.210519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.907 [2024-04-26 15:36:37.210526] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.907 [2024-04-26 15:36:37.210533] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:19.907 [2024-04-26 15:36:37.210549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.907 qpair failed and we were unable to recover it. 
00:26:19.907 [2024-04-26 15:36:37.220406] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.907 [2024-04-26 15:36:37.220475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.907 [2024-04-26 15:36:37.220494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.907 [2024-04-26 15:36:37.220501] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.908 [2024-04-26 15:36:37.220507] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:19.908 [2024-04-26 15:36:37.220523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.908 qpair failed and we were unable to recover it. 
00:26:19.908 [2024-04-26 15:36:37.230430] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.908 [2024-04-26 15:36:37.230514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.908 [2024-04-26 15:36:37.230533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.908 [2024-04-26 15:36:37.230540] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.908 [2024-04-26 15:36:37.230547] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:19.908 [2024-04-26 15:36:37.230563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.908 qpair failed and we were unable to recover it. 
00:26:19.908 [2024-04-26 15:36:37.240462] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.908 [2024-04-26 15:36:37.240546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.908 [2024-04-26 15:36:37.240568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.908 [2024-04-26 15:36:37.240579] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.908 [2024-04-26 15:36:37.240585] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:19.908 [2024-04-26 15:36:37.240603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.908 qpair failed and we were unable to recover it. 
00:26:19.908 [2024-04-26 15:36:37.250524] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.908 [2024-04-26 15:36:37.250629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.908 [2024-04-26 15:36:37.250663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.908 [2024-04-26 15:36:37.250673] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.908 [2024-04-26 15:36:37.250680] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:19.908 [2024-04-26 15:36:37.250702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.908 qpair failed and we were unable to recover it. 
00:26:19.908 [2024-04-26 15:36:37.260532] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.908 [2024-04-26 15:36:37.260614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.908 [2024-04-26 15:36:37.260635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.908 [2024-04-26 15:36:37.260643] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.908 [2024-04-26 15:36:37.260649] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:19.908 [2024-04-26 15:36:37.260667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.908 qpair failed and we were unable to recover it. 
00:26:19.908 [2024-04-26 15:36:37.270538] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.908 [2024-04-26 15:36:37.270606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.908 [2024-04-26 15:36:37.270625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.908 [2024-04-26 15:36:37.270633] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.908 [2024-04-26 15:36:37.270639] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:19.908 [2024-04-26 15:36:37.270655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.908 qpair failed and we were unable to recover it. 
00:26:19.908 [2024-04-26 15:36:37.280581] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.908 [2024-04-26 15:36:37.280658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.908 [2024-04-26 15:36:37.280677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.908 [2024-04-26 15:36:37.280684] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.908 [2024-04-26 15:36:37.280690] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:19.908 [2024-04-26 15:36:37.280706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.908 qpair failed and we were unable to recover it. 
00:26:19.908 [2024-04-26 15:36:37.290470] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.908 [2024-04-26 15:36:37.290535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.908 [2024-04-26 15:36:37.290553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.908 [2024-04-26 15:36:37.290560] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.908 [2024-04-26 15:36:37.290573] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:19.908 [2024-04-26 15:36:37.290588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.908 qpair failed and we were unable to recover it. 
00:26:19.908 [2024-04-26 15:36:37.300633] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.908 [2024-04-26 15:36:37.300703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.908 [2024-04-26 15:36:37.300722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.908 [2024-04-26 15:36:37.300730] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.908 [2024-04-26 15:36:37.300736] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:19.908 [2024-04-26 15:36:37.300752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.908 qpair failed and we were unable to recover it. 
00:26:19.908 [2024-04-26 15:36:37.310556] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.908 [2024-04-26 15:36:37.310626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.908 [2024-04-26 15:36:37.310649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.908 [2024-04-26 15:36:37.310656] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.908 [2024-04-26 15:36:37.310664] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:19.908 [2024-04-26 15:36:37.310682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.908 qpair failed and we were unable to recover it. 
00:26:19.908 [2024-04-26 15:36:37.320575] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.908 [2024-04-26 15:36:37.320649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.908 [2024-04-26 15:36:37.320669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.908 [2024-04-26 15:36:37.320676] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.908 [2024-04-26 15:36:37.320682] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:19.908 [2024-04-26 15:36:37.320699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.908 qpair failed and we were unable to recover it. 
00:26:19.908 [2024-04-26 15:36:37.330707] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.908 [2024-04-26 15:36:37.330778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.908 [2024-04-26 15:36:37.330796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.908 [2024-04-26 15:36:37.330804] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.908 [2024-04-26 15:36:37.330810] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:19.908 [2024-04-26 15:36:37.330826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.908 qpair failed and we were unable to recover it. 
00:26:19.908 [2024-04-26 15:36:37.340792] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.908 [2024-04-26 15:36:37.340893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.908 [2024-04-26 15:36:37.340913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.908 [2024-04-26 15:36:37.340922] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.908 [2024-04-26 15:36:37.340928] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:19.908 [2024-04-26 15:36:37.340944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.908 qpair failed and we were unable to recover it. 
00:26:20.224 [2024-04-26 15:36:37.350772] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.224 [2024-04-26 15:36:37.350851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.224 [2024-04-26 15:36:37.350871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.224 [2024-04-26 15:36:37.350878] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.224 [2024-04-26 15:36:37.350885] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.224 [2024-04-26 15:36:37.350902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.224 qpair failed and we were unable to recover it. 
00:26:20.224 [2024-04-26 15:36:37.360817] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.224 [2024-04-26 15:36:37.360881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.224 [2024-04-26 15:36:37.360901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.224 [2024-04-26 15:36:37.360908] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.224 [2024-04-26 15:36:37.360914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.224 [2024-04-26 15:36:37.360930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.224 qpair failed and we were unable to recover it. 
00:26:20.224 [2024-04-26 15:36:37.370858] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.224 [2024-04-26 15:36:37.370926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.224 [2024-04-26 15:36:37.370944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.224 [2024-04-26 15:36:37.370951] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.224 [2024-04-26 15:36:37.370957] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.224 [2024-04-26 15:36:37.370973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.224 qpair failed and we were unable to recover it. 
00:26:20.224 [2024-04-26 15:36:37.381083] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.224 [2024-04-26 15:36:37.381179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.224 [2024-04-26 15:36:37.381197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.224 [2024-04-26 15:36:37.381215] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.224 [2024-04-26 15:36:37.381221] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.224 [2024-04-26 15:36:37.381237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.224 qpair failed and we were unable to recover it. 
00:26:20.224 [2024-04-26 15:36:37.390961] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.224 [2024-04-26 15:36:37.391033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.224 [2024-04-26 15:36:37.391052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.224 [2024-04-26 15:36:37.391059] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.224 [2024-04-26 15:36:37.391065] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.224 [2024-04-26 15:36:37.391081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.224 qpair failed and we were unable to recover it. 
00:26:20.224 [2024-04-26 15:36:37.401011] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.224 [2024-04-26 15:36:37.401116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.224 [2024-04-26 15:36:37.401144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.224 [2024-04-26 15:36:37.401153] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.224 [2024-04-26 15:36:37.401159] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.224 [2024-04-26 15:36:37.401180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.224 qpair failed and we were unable to recover it. 
00:26:20.224 [2024-04-26 15:36:37.411016] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.224 [2024-04-26 15:36:37.411085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.224 [2024-04-26 15:36:37.411105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.224 [2024-04-26 15:36:37.411112] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.224 [2024-04-26 15:36:37.411119] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.224 [2024-04-26 15:36:37.411136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.224 qpair failed and we were unable to recover it. 
00:26:20.224 [2024-04-26 15:36:37.421034] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.225 [2024-04-26 15:36:37.421165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.225 [2024-04-26 15:36:37.421185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.225 [2024-04-26 15:36:37.421192] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.225 [2024-04-26 15:36:37.421198] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.225 [2024-04-26 15:36:37.421215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.225 qpair failed and we were unable to recover it. 
00:26:20.225 [2024-04-26 15:36:37.431050] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.225 [2024-04-26 15:36:37.431160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.225 [2024-04-26 15:36:37.431180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.225 [2024-04-26 15:36:37.431187] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.225 [2024-04-26 15:36:37.431194] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.225 [2024-04-26 15:36:37.431211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.225 qpair failed and we were unable to recover it. 
00:26:20.225 [2024-04-26 15:36:37.441085] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.225 [2024-04-26 15:36:37.441146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.225 [2024-04-26 15:36:37.441165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.225 [2024-04-26 15:36:37.441172] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.225 [2024-04-26 15:36:37.441179] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.225 [2024-04-26 15:36:37.441194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.225 qpair failed and we were unable to recover it. 
00:26:20.225 [2024-04-26 15:36:37.451074] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.225 [2024-04-26 15:36:37.451154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.225 [2024-04-26 15:36:37.451173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.225 [2024-04-26 15:36:37.451180] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.225 [2024-04-26 15:36:37.451187] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.225 [2024-04-26 15:36:37.451203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.225 qpair failed and we were unable to recover it. 
00:26:20.225 [2024-04-26 15:36:37.461144] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.225 [2024-04-26 15:36:37.461227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.225 [2024-04-26 15:36:37.461246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.225 [2024-04-26 15:36:37.461253] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.225 [2024-04-26 15:36:37.461259] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.225 [2024-04-26 15:36:37.461275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.225 qpair failed and we were unable to recover it. 
00:26:20.225 [2024-04-26 15:36:37.471132] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.225 [2024-04-26 15:36:37.471217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.225 [2024-04-26 15:36:37.471241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.225 [2024-04-26 15:36:37.471248] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.225 [2024-04-26 15:36:37.471254] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.225 [2024-04-26 15:36:37.471270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.225 qpair failed and we were unable to recover it. 
00:26:20.225 [2024-04-26 15:36:37.481164] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.225 [2024-04-26 15:36:37.481283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.225 [2024-04-26 15:36:37.481303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.225 [2024-04-26 15:36:37.481311] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.225 [2024-04-26 15:36:37.481317] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.225 [2024-04-26 15:36:37.481333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.225 qpair failed and we were unable to recover it. 
00:26:20.225 [2024-04-26 15:36:37.491237] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.225 [2024-04-26 15:36:37.491306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.225 [2024-04-26 15:36:37.491327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.225 [2024-04-26 15:36:37.491334] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.225 [2024-04-26 15:36:37.491341] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.225 [2024-04-26 15:36:37.491357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.225 qpair failed and we were unable to recover it. 
00:26:20.225 [2024-04-26 15:36:37.501272] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.225 [2024-04-26 15:36:37.501341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.225 [2024-04-26 15:36:37.501360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.225 [2024-04-26 15:36:37.501367] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.225 [2024-04-26 15:36:37.501373] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.225 [2024-04-26 15:36:37.501389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.225 qpair failed and we were unable to recover it. 
00:26:20.225 [2024-04-26 15:36:37.511218] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.225 [2024-04-26 15:36:37.511285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.225 [2024-04-26 15:36:37.511305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.225 [2024-04-26 15:36:37.511312] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.225 [2024-04-26 15:36:37.511319] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.225 [2024-04-26 15:36:37.511340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.225 qpair failed and we were unable to recover it. 
00:26:20.225 [2024-04-26 15:36:37.521348] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.225 [2024-04-26 15:36:37.521422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.225 [2024-04-26 15:36:37.521442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.225 [2024-04-26 15:36:37.521449] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.225 [2024-04-26 15:36:37.521455] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.225 [2024-04-26 15:36:37.521472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.225 qpair failed and we were unable to recover it. 
00:26:20.225 [2024-04-26 15:36:37.531311] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.225 [2024-04-26 15:36:37.531382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.225 [2024-04-26 15:36:37.531402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.225 [2024-04-26 15:36:37.531409] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.225 [2024-04-26 15:36:37.531416] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.225 [2024-04-26 15:36:37.531432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.225 qpair failed and we were unable to recover it. 
00:26:20.225 [2024-04-26 15:36:37.541322] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.225 [2024-04-26 15:36:37.541394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.225 [2024-04-26 15:36:37.541413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.225 [2024-04-26 15:36:37.541420] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.225 [2024-04-26 15:36:37.541426] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.225 [2024-04-26 15:36:37.541443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.225 qpair failed and we were unable to recover it. 
00:26:20.225 [2024-04-26 15:36:37.551347] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.225 [2024-04-26 15:36:37.551422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.225 [2024-04-26 15:36:37.551442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.226 [2024-04-26 15:36:37.551449] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.226 [2024-04-26 15:36:37.551456] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.226 [2024-04-26 15:36:37.551472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.226 qpair failed and we were unable to recover it. 
00:26:20.226 [2024-04-26 15:36:37.561424] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.226 [2024-04-26 15:36:37.561488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.226 [2024-04-26 15:36:37.561512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.226 [2024-04-26 15:36:37.561520] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.226 [2024-04-26 15:36:37.561527] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.226 [2024-04-26 15:36:37.561543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.226 qpair failed and we were unable to recover it. 
00:26:20.226 [2024-04-26 15:36:37.571441] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.226 [2024-04-26 15:36:37.571507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.226 [2024-04-26 15:36:37.571527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.226 [2024-04-26 15:36:37.571534] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.226 [2024-04-26 15:36:37.571540] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.226 [2024-04-26 15:36:37.571556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.226 qpair failed and we were unable to recover it. 
00:26:20.226 [2024-04-26 15:36:37.581460] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.226 [2024-04-26 15:36:37.581533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.226 [2024-04-26 15:36:37.581552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.226 [2024-04-26 15:36:37.581559] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.226 [2024-04-26 15:36:37.581565] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.226 [2024-04-26 15:36:37.581581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.226 qpair failed and we were unable to recover it. 
00:26:20.226 [2024-04-26 15:36:37.591497] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.226 [2024-04-26 15:36:37.591570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.226 [2024-04-26 15:36:37.591604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.226 [2024-04-26 15:36:37.591613] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.226 [2024-04-26 15:36:37.591620] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.226 [2024-04-26 15:36:37.591642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.226 qpair failed and we were unable to recover it. 
00:26:20.226 [2024-04-26 15:36:37.601415] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.226 [2024-04-26 15:36:37.601497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.226 [2024-04-26 15:36:37.601531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.226 [2024-04-26 15:36:37.601541] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.226 [2024-04-26 15:36:37.601548] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.226 [2024-04-26 15:36:37.601577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.226 qpair failed and we were unable to recover it. 
00:26:20.226 [2024-04-26 15:36:37.611566] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.226 [2024-04-26 15:36:37.611655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.226 [2024-04-26 15:36:37.611691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.226 [2024-04-26 15:36:37.611701] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.226 [2024-04-26 15:36:37.611708] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.226 [2024-04-26 15:36:37.611731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.226 qpair failed and we were unable to recover it. 
00:26:20.226 [2024-04-26 15:36:37.621589] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.226 [2024-04-26 15:36:37.621674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.226 [2024-04-26 15:36:37.621712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.226 [2024-04-26 15:36:37.621720] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.226 [2024-04-26 15:36:37.621727] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.226 [2024-04-26 15:36:37.621750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.226 qpair failed and we were unable to recover it. 
00:26:20.226 [2024-04-26 15:36:37.631612] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.226 [2024-04-26 15:36:37.631690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.226 [2024-04-26 15:36:37.631712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.226 [2024-04-26 15:36:37.631719] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.226 [2024-04-26 15:36:37.631725] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.226 [2024-04-26 15:36:37.631743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.226 qpair failed and we were unable to recover it. 
00:26:20.226 [2024-04-26 15:36:37.641511] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.226 [2024-04-26 15:36:37.641572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.226 [2024-04-26 15:36:37.641592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.226 [2024-04-26 15:36:37.641599] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.226 [2024-04-26 15:36:37.641606] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.226 [2024-04-26 15:36:37.641623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.226 qpair failed and we were unable to recover it. 
00:26:20.226 [2024-04-26 15:36:37.651716] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.226 [2024-04-26 15:36:37.651790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.226 [2024-04-26 15:36:37.651817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.226 [2024-04-26 15:36:37.651825] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.226 [2024-04-26 15:36:37.651832] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.226 [2024-04-26 15:36:37.651869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.226 qpair failed and we were unable to recover it. 
00:26:20.226 [2024-04-26 15:36:37.661731] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.226 [2024-04-26 15:36:37.661811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.226 [2024-04-26 15:36:37.661831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.226 [2024-04-26 15:36:37.661848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.226 [2024-04-26 15:36:37.661854] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.226 [2024-04-26 15:36:37.661871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.226 qpair failed and we were unable to recover it. 
00:26:20.489 [2024-04-26 15:36:37.671691] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.489 [2024-04-26 15:36:37.671762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.489 [2024-04-26 15:36:37.671782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.489 [2024-04-26 15:36:37.671790] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.489 [2024-04-26 15:36:37.671796] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.489 [2024-04-26 15:36:37.671813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.489 qpair failed and we were unable to recover it. 
00:26:20.489 [2024-04-26 15:36:37.681726] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.489 [2024-04-26 15:36:37.681793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.489 [2024-04-26 15:36:37.681813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.489 [2024-04-26 15:36:37.681820] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.489 [2024-04-26 15:36:37.681827] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.489 [2024-04-26 15:36:37.681850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.489 qpair failed and we were unable to recover it. 
00:26:20.489 [2024-04-26 15:36:37.691733] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.489 [2024-04-26 15:36:37.691812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.489 [2024-04-26 15:36:37.691831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.489 [2024-04-26 15:36:37.691848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.489 [2024-04-26 15:36:37.691863] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.489 [2024-04-26 15:36:37.691880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.489 qpair failed and we were unable to recover it. 
00:26:20.489 [2024-04-26 15:36:37.701900] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.489 [2024-04-26 15:36:37.702005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.489 [2024-04-26 15:36:37.702025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.489 [2024-04-26 15:36:37.702033] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.489 [2024-04-26 15:36:37.702039] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.489 [2024-04-26 15:36:37.702056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.489 qpair failed and we were unable to recover it. 
00:26:20.489 [2024-04-26 15:36:37.711820] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.489 [2024-04-26 15:36:37.711897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.489 [2024-04-26 15:36:37.711918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.489 [2024-04-26 15:36:37.711925] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.489 [2024-04-26 15:36:37.711932] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.489 [2024-04-26 15:36:37.711949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.489 qpair failed and we were unable to recover it. 
00:26:20.489 [2024-04-26 15:36:37.721894] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.489 [2024-04-26 15:36:37.721969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.489 [2024-04-26 15:36:37.721989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.489 [2024-04-26 15:36:37.721996] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.489 [2024-04-26 15:36:37.722003] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.489 [2024-04-26 15:36:37.722019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.489 qpair failed and we were unable to recover it. 
00:26:20.489 [2024-04-26 15:36:37.731899] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.489 [2024-04-26 15:36:37.731967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.489 [2024-04-26 15:36:37.731986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.489 [2024-04-26 15:36:37.731993] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.489 [2024-04-26 15:36:37.731999] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.489 [2024-04-26 15:36:37.732015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.489 qpair failed and we were unable to recover it. 
00:26:20.489 [2024-04-26 15:36:37.741846] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.489 [2024-04-26 15:36:37.741924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.489 [2024-04-26 15:36:37.741944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.489 [2024-04-26 15:36:37.741951] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.489 [2024-04-26 15:36:37.741957] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.489 [2024-04-26 15:36:37.741973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.489 qpair failed and we were unable to recover it. 
00:26:20.489 [2024-04-26 15:36:37.751843] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.489 [2024-04-26 15:36:37.751914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.489 [2024-04-26 15:36:37.751934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.489 [2024-04-26 15:36:37.751941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.489 [2024-04-26 15:36:37.751947] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.489 [2024-04-26 15:36:37.751963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.489 qpair failed and we were unable to recover it. 
00:26:20.489 [2024-04-26 15:36:37.761988] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.489 [2024-04-26 15:36:37.762062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.489 [2024-04-26 15:36:37.762081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.489 [2024-04-26 15:36:37.762088] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.489 [2024-04-26 15:36:37.762095] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.489 [2024-04-26 15:36:37.762110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.489 qpair failed and we were unable to recover it. 
00:26:20.489 [2024-04-26 15:36:37.772062] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.489 [2024-04-26 15:36:37.772133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.489 [2024-04-26 15:36:37.772154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.489 [2024-04-26 15:36:37.772161] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.489 [2024-04-26 15:36:37.772168] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.489 [2024-04-26 15:36:37.772185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.489 qpair failed and we were unable to recover it. 
00:26:20.489 [2024-04-26 15:36:37.782104] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.489 [2024-04-26 15:36:37.782192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.489 [2024-04-26 15:36:37.782210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.489 [2024-04-26 15:36:37.782223] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.489 [2024-04-26 15:36:37.782229] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.489 [2024-04-26 15:36:37.782245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.489 qpair failed and we were unable to recover it. 
00:26:20.489 [2024-04-26 15:36:37.792017] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.489 [2024-04-26 15:36:37.792099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.490 [2024-04-26 15:36:37.792119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.490 [2024-04-26 15:36:37.792127] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.490 [2024-04-26 15:36:37.792133] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.490 [2024-04-26 15:36:37.792150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.490 qpair failed and we were unable to recover it. 
00:26:20.490 [2024-04-26 15:36:37.802082] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.490 [2024-04-26 15:36:37.802160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.490 [2024-04-26 15:36:37.802179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.490 [2024-04-26 15:36:37.802187] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.490 [2024-04-26 15:36:37.802193] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.490 [2024-04-26 15:36:37.802209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.490 qpair failed and we were unable to recover it. 
00:26:20.490 [2024-04-26 15:36:37.812186] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.490 [2024-04-26 15:36:37.812256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.490 [2024-04-26 15:36:37.812276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.490 [2024-04-26 15:36:37.812283] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.490 [2024-04-26 15:36:37.812290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.490 [2024-04-26 15:36:37.812306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.490 qpair failed and we were unable to recover it. 
00:26:20.490 [2024-04-26 15:36:37.822164] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.490 [2024-04-26 15:36:37.822241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.490 [2024-04-26 15:36:37.822261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.490 [2024-04-26 15:36:37.822268] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.490 [2024-04-26 15:36:37.822274] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.490 [2024-04-26 15:36:37.822290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.490 qpair failed and we were unable to recover it. 
00:26:20.490 [2024-04-26 15:36:37.832203] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.490 [2024-04-26 15:36:37.832322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.490 [2024-04-26 15:36:37.832341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.490 [2024-04-26 15:36:37.832349] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.490 [2024-04-26 15:36:37.832355] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.490 [2024-04-26 15:36:37.832372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.490 qpair failed and we were unable to recover it. 
00:26:20.490 [2024-04-26 15:36:37.842245] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.490 [2024-04-26 15:36:37.842357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.490 [2024-04-26 15:36:37.842375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.490 [2024-04-26 15:36:37.842383] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.490 [2024-04-26 15:36:37.842390] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.490 [2024-04-26 15:36:37.842406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.490 qpair failed and we were unable to recover it. 
00:26:20.490 [2024-04-26 15:36:37.852287] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.490 [2024-04-26 15:36:37.852361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.490 [2024-04-26 15:36:37.852380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.490 [2024-04-26 15:36:37.852387] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.490 [2024-04-26 15:36:37.852394] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.490 [2024-04-26 15:36:37.852410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.490 qpair failed and we were unable to recover it. 
00:26:20.490 [2024-04-26 15:36:37.862218] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.490 [2024-04-26 15:36:37.862302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.490 [2024-04-26 15:36:37.862320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.490 [2024-04-26 15:36:37.862327] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.490 [2024-04-26 15:36:37.862333] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.490 [2024-04-26 15:36:37.862349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.490 qpair failed and we were unable to recover it. 
00:26:20.490 [2024-04-26 15:36:37.872332] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.490 [2024-04-26 15:36:37.872399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.490 [2024-04-26 15:36:37.872423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.490 [2024-04-26 15:36:37.872431] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.490 [2024-04-26 15:36:37.872437] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.490 [2024-04-26 15:36:37.872453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.490 qpair failed and we were unable to recover it. 
00:26:20.490 [2024-04-26 15:36:37.882334] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.490 [2024-04-26 15:36:37.882407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.490 [2024-04-26 15:36:37.882426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.490 [2024-04-26 15:36:37.882434] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.490 [2024-04-26 15:36:37.882440] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.490 [2024-04-26 15:36:37.882456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.490 qpair failed and we were unable to recover it. 
00:26:20.490 [2024-04-26 15:36:37.892429] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.490 [2024-04-26 15:36:37.892503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.490 [2024-04-26 15:36:37.892522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.490 [2024-04-26 15:36:37.892529] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.490 [2024-04-26 15:36:37.892536] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.490 [2024-04-26 15:36:37.892552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.490 qpair failed and we were unable to recover it. 
00:26:20.490 [2024-04-26 15:36:37.902313] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.490 [2024-04-26 15:36:37.902399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.490 [2024-04-26 15:36:37.902426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.490 [2024-04-26 15:36:37.902435] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.490 [2024-04-26 15:36:37.902442] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.490 [2024-04-26 15:36:37.902463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.490 qpair failed and we were unable to recover it. 
00:26:20.490 [2024-04-26 15:36:37.912456] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.490 [2024-04-26 15:36:37.912527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.490 [2024-04-26 15:36:37.912548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.490 [2024-04-26 15:36:37.912556] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.490 [2024-04-26 15:36:37.912562] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.490 [2024-04-26 15:36:37.912580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.490 qpair failed and we were unable to recover it. 
00:26:20.490 [2024-04-26 15:36:37.922549] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.490 [2024-04-26 15:36:37.922646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.490 [2024-04-26 15:36:37.922680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.490 [2024-04-26 15:36:37.922689] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.491 [2024-04-26 15:36:37.922696] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.491 [2024-04-26 15:36:37.922717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.491 qpair failed and we were unable to recover it. 
00:26:20.491 [2024-04-26 15:36:37.932527] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.491 [2024-04-26 15:36:37.932599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.491 [2024-04-26 15:36:37.932621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.491 [2024-04-26 15:36:37.932628] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.491 [2024-04-26 15:36:37.932635] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.491 [2024-04-26 15:36:37.932652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.491 qpair failed and we were unable to recover it. 
00:26:20.754 [2024-04-26 15:36:37.942570] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.754 [2024-04-26 15:36:37.942641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.754 [2024-04-26 15:36:37.942661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.754 [2024-04-26 15:36:37.942669] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.754 [2024-04-26 15:36:37.942675] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.754 [2024-04-26 15:36:37.942692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.754 qpair failed and we were unable to recover it. 
00:26:20.754 [2024-04-26 15:36:37.952560] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.754 [2024-04-26 15:36:37.952685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.754 [2024-04-26 15:36:37.952705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.754 [2024-04-26 15:36:37.952712] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.754 [2024-04-26 15:36:37.952719] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.754 [2024-04-26 15:36:37.952735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.754 qpair failed and we were unable to recover it. 
00:26:20.754 [2024-04-26 15:36:37.962641] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.754 [2024-04-26 15:36:37.962706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.754 [2024-04-26 15:36:37.962733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.754 [2024-04-26 15:36:37.962740] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.754 [2024-04-26 15:36:37.962746] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.754 [2024-04-26 15:36:37.962763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.754 qpair failed and we were unable to recover it. 
00:26:20.754 [2024-04-26 15:36:37.972636] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.754 [2024-04-26 15:36:37.972704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.754 [2024-04-26 15:36:37.972724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.754 [2024-04-26 15:36:37.972731] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.754 [2024-04-26 15:36:37.972738] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.754 [2024-04-26 15:36:37.972754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.754 qpair failed and we were unable to recover it. 
00:26:20.754 [2024-04-26 15:36:37.982661] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.754 [2024-04-26 15:36:37.982740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.754 [2024-04-26 15:36:37.982758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.754 [2024-04-26 15:36:37.982766] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.754 [2024-04-26 15:36:37.982772] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.754 [2024-04-26 15:36:37.982789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.754 qpair failed and we were unable to recover it. 
00:26:20.754 [2024-04-26 15:36:37.992558] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.754 [2024-04-26 15:36:37.992620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.754 [2024-04-26 15:36:37.992639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.754 [2024-04-26 15:36:37.992646] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.754 [2024-04-26 15:36:37.992652] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.754 [2024-04-26 15:36:37.992669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.754 qpair failed and we were unable to recover it. 
00:26:20.754 [2024-04-26 15:36:38.002740] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.754 [2024-04-26 15:36:38.002810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.754 [2024-04-26 15:36:38.002830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.754 [2024-04-26 15:36:38.002847] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.754 [2024-04-26 15:36:38.002855] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.754 [2024-04-26 15:36:38.002883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.754 qpair failed and we were unable to recover it. 
00:26:20.754 [2024-04-26 15:36:38.012811] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.754 [2024-04-26 15:36:38.012904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.755 [2024-04-26 15:36:38.012929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.755 [2024-04-26 15:36:38.012939] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.755 [2024-04-26 15:36:38.012945] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.755 [2024-04-26 15:36:38.012963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.755 qpair failed and we were unable to recover it. 
00:26:20.755 [2024-04-26 15:36:38.022790] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.755 [2024-04-26 15:36:38.022866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.755 [2024-04-26 15:36:38.022887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.755 [2024-04-26 15:36:38.022894] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.755 [2024-04-26 15:36:38.022900] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.755 [2024-04-26 15:36:38.022917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.755 qpair failed and we were unable to recover it. 
00:26:20.755 [2024-04-26 15:36:38.032666] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.755 [2024-04-26 15:36:38.032740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.755 [2024-04-26 15:36:38.032761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.755 [2024-04-26 15:36:38.032768] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.755 [2024-04-26 15:36:38.032775] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.755 [2024-04-26 15:36:38.032792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.755 qpair failed and we were unable to recover it. 
00:26:20.755 [2024-04-26 15:36:38.042714] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.755 [2024-04-26 15:36:38.042821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.755 [2024-04-26 15:36:38.042850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.755 [2024-04-26 15:36:38.042858] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.755 [2024-04-26 15:36:38.042864] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.755 [2024-04-26 15:36:38.042881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.755 qpair failed and we were unable to recover it. 
00:26:20.755 [2024-04-26 15:36:38.052874] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.755 [2024-04-26 15:36:38.052939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.755 [2024-04-26 15:36:38.052964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.755 [2024-04-26 15:36:38.052971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.755 [2024-04-26 15:36:38.052977] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.755 [2024-04-26 15:36:38.052994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.755 qpair failed and we were unable to recover it. 
00:26:20.755 [2024-04-26 15:36:38.062798] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.755 [2024-04-26 15:36:38.062876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.755 [2024-04-26 15:36:38.062895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.755 [2024-04-26 15:36:38.062902] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.755 [2024-04-26 15:36:38.062908] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.755 [2024-04-26 15:36:38.062925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.755 qpair failed and we were unable to recover it. 
00:26:20.755 [2024-04-26 15:36:38.072925] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.755 [2024-04-26 15:36:38.072982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.755 [2024-04-26 15:36:38.073000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.755 [2024-04-26 15:36:38.073007] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.755 [2024-04-26 15:36:38.073013] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.755 [2024-04-26 15:36:38.073030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.755 qpair failed and we were unable to recover it. 
00:26:20.755 [2024-04-26 15:36:38.082916] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.755 [2024-04-26 15:36:38.082991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.755 [2024-04-26 15:36:38.083009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.755 [2024-04-26 15:36:38.083016] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.755 [2024-04-26 15:36:38.083022] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.755 [2024-04-26 15:36:38.083039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.755 qpair failed and we were unable to recover it. 
00:26:20.755 [2024-04-26 15:36:38.092919] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.755 [2024-04-26 15:36:38.092987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.755 [2024-04-26 15:36:38.093006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.755 [2024-04-26 15:36:38.093013] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.755 [2024-04-26 15:36:38.093025] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.755 [2024-04-26 15:36:38.093041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.755 qpair failed and we were unable to recover it. 
00:26:20.755 [2024-04-26 15:36:38.102989] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.755 [2024-04-26 15:36:38.103072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.755 [2024-04-26 15:36:38.103092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.755 [2024-04-26 15:36:38.103099] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.755 [2024-04-26 15:36:38.103105] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.756 [2024-04-26 15:36:38.103122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.756 qpair failed and we were unable to recover it. 
00:26:20.756 [2024-04-26 15:36:38.113000] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.756 [2024-04-26 15:36:38.113061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.756 [2024-04-26 15:36:38.113081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.756 [2024-04-26 15:36:38.113088] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.756 [2024-04-26 15:36:38.113094] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.756 [2024-04-26 15:36:38.113111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.756 qpair failed and we were unable to recover it. 
00:26:20.756 [2024-04-26 15:36:38.123078] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.756 [2024-04-26 15:36:38.123146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.756 [2024-04-26 15:36:38.123165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.756 [2024-04-26 15:36:38.123172] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.756 [2024-04-26 15:36:38.123178] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.756 [2024-04-26 15:36:38.123194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.756 qpair failed and we were unable to recover it. 
00:26:20.756 [2024-04-26 15:36:38.133101] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.756 [2024-04-26 15:36:38.133169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.756 [2024-04-26 15:36:38.133189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.756 [2024-04-26 15:36:38.133196] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.756 [2024-04-26 15:36:38.133202] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.756 [2024-04-26 15:36:38.133218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.756 qpair failed and we were unable to recover it. 
00:26:20.756 [2024-04-26 15:36:38.143157] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.756 [2024-04-26 15:36:38.143240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.756 [2024-04-26 15:36:38.143260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.756 [2024-04-26 15:36:38.143267] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.756 [2024-04-26 15:36:38.143273] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.756 [2024-04-26 15:36:38.143289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.756 qpair failed and we were unable to recover it. 
00:26:20.756 [2024-04-26 15:36:38.153175] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.756 [2024-04-26 15:36:38.153284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.756 [2024-04-26 15:36:38.153312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.756 [2024-04-26 15:36:38.153322] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.756 [2024-04-26 15:36:38.153329] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.756 [2024-04-26 15:36:38.153350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.756 qpair failed and we were unable to recover it. 
00:26:20.756 [2024-04-26 15:36:38.163229] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.756 [2024-04-26 15:36:38.163300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.756 [2024-04-26 15:36:38.163320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.756 [2024-04-26 15:36:38.163328] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.756 [2024-04-26 15:36:38.163334] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.756 [2024-04-26 15:36:38.163351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.756 qpair failed and we were unable to recover it. 
00:26:20.756 [2024-04-26 15:36:38.173277] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.756 [2024-04-26 15:36:38.173341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.756 [2024-04-26 15:36:38.173360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.756 [2024-04-26 15:36:38.173368] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.756 [2024-04-26 15:36:38.173374] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.756 [2024-04-26 15:36:38.173390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.756 qpair failed and we were unable to recover it. 
00:26:20.756 [2024-04-26 15:36:38.183272] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.756 [2024-04-26 15:36:38.183345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.756 [2024-04-26 15:36:38.183364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.756 [2024-04-26 15:36:38.183377] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.756 [2024-04-26 15:36:38.183383] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.756 [2024-04-26 15:36:38.183399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.756 qpair failed and we were unable to recover it. 
00:26:20.756 [2024-04-26 15:36:38.193287] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.756 [2024-04-26 15:36:38.193351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.756 [2024-04-26 15:36:38.193370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.756 [2024-04-26 15:36:38.193377] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.756 [2024-04-26 15:36:38.193384] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:20.756 [2024-04-26 15:36:38.193400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.756 qpair failed and we were unable to recover it. 
00:26:21.019 [2024-04-26 15:36:38.203346] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.019 [2024-04-26 15:36:38.203428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.019 [2024-04-26 15:36:38.203447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.019 [2024-04-26 15:36:38.203455] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.019 [2024-04-26 15:36:38.203461] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.019 [2024-04-26 15:36:38.203477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.019 qpair failed and we were unable to recover it. 
00:26:21.019 [2024-04-26 15:36:38.213398] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.019 [2024-04-26 15:36:38.213469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.019 [2024-04-26 15:36:38.213489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.019 [2024-04-26 15:36:38.213497] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.019 [2024-04-26 15:36:38.213503] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.019 [2024-04-26 15:36:38.213521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.019 qpair failed and we were unable to recover it. 
00:26:21.019 [2024-04-26 15:36:38.223398] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.019 [2024-04-26 15:36:38.223467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.019 [2024-04-26 15:36:38.223486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.019 [2024-04-26 15:36:38.223494] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.019 [2024-04-26 15:36:38.223500] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.019 [2024-04-26 15:36:38.223516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.019 qpair failed and we were unable to recover it. 
00:26:21.019 [2024-04-26 15:36:38.233414] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.019 [2024-04-26 15:36:38.233484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.019 [2024-04-26 15:36:38.233503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.019 [2024-04-26 15:36:38.233511] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.019 [2024-04-26 15:36:38.233517] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.019 [2024-04-26 15:36:38.233533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.019 qpair failed and we were unable to recover it. 
00:26:21.019 [2024-04-26 15:36:38.243507] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.019 [2024-04-26 15:36:38.243575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.019 [2024-04-26 15:36:38.243600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.019 [2024-04-26 15:36:38.243607] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.019 [2024-04-26 15:36:38.243613] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.019 [2024-04-26 15:36:38.243632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.019 qpair failed and we were unable to recover it. 
00:26:21.019 [2024-04-26 15:36:38.253491] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.019 [2024-04-26 15:36:38.253566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.019 [2024-04-26 15:36:38.253601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.019 [2024-04-26 15:36:38.253610] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.019 [2024-04-26 15:36:38.253617] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.019 [2024-04-26 15:36:38.253640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.019 qpair failed and we were unable to recover it.
00:26:21.019 [2024-04-26 15:36:38.263536] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.019 [2024-04-26 15:36:38.263617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.019 [2024-04-26 15:36:38.263651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.019 [2024-04-26 15:36:38.263660] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.019 [2024-04-26 15:36:38.263667] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.019 [2024-04-26 15:36:38.263689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.019 qpair failed and we were unable to recover it.
00:26:21.019 [2024-04-26 15:36:38.273555] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.019 [2024-04-26 15:36:38.273634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.019 [2024-04-26 15:36:38.273655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.019 [2024-04-26 15:36:38.273669] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.019 [2024-04-26 15:36:38.273676] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.019 [2024-04-26 15:36:38.273693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.019 qpair failed and we were unable to recover it.
00:26:21.019 [2024-04-26 15:36:38.283672] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.019 [2024-04-26 15:36:38.283770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.019 [2024-04-26 15:36:38.283789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.019 [2024-04-26 15:36:38.283796] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.019 [2024-04-26 15:36:38.283803] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.019 [2024-04-26 15:36:38.283819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.019 qpair failed and we were unable to recover it.
00:26:21.019 [2024-04-26 15:36:38.293665] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.019 [2024-04-26 15:36:38.293728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.019 [2024-04-26 15:36:38.293747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.019 [2024-04-26 15:36:38.293754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.019 [2024-04-26 15:36:38.293760] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.019 [2024-04-26 15:36:38.293777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.019 qpair failed and we were unable to recover it.
00:26:21.019 [2024-04-26 15:36:38.303649] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.019 [2024-04-26 15:36:38.303727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.019 [2024-04-26 15:36:38.303746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.019 [2024-04-26 15:36:38.303753] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.019 [2024-04-26 15:36:38.303760] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.019 [2024-04-26 15:36:38.303776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.019 qpair failed and we were unable to recover it.
00:26:21.019 [2024-04-26 15:36:38.313668] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.019 [2024-04-26 15:36:38.313746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.019 [2024-04-26 15:36:38.313766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.019 [2024-04-26 15:36:38.313773] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.019 [2024-04-26 15:36:38.313779] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.019 [2024-04-26 15:36:38.313795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.019 qpair failed and we were unable to recover it.
00:26:21.020 [2024-04-26 15:36:38.323702] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.020 [2024-04-26 15:36:38.323821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.020 [2024-04-26 15:36:38.323846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.020 [2024-04-26 15:36:38.323853] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.020 [2024-04-26 15:36:38.323860] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.020 [2024-04-26 15:36:38.323876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.020 qpair failed and we were unable to recover it.
00:26:21.020 [2024-04-26 15:36:38.333734] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.020 [2024-04-26 15:36:38.333801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.020 [2024-04-26 15:36:38.333822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.020 [2024-04-26 15:36:38.333831] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.020 [2024-04-26 15:36:38.333842] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.020 [2024-04-26 15:36:38.333860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.020 qpair failed and we were unable to recover it.
00:26:21.020 [2024-04-26 15:36:38.343768] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.020 [2024-04-26 15:36:38.343859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.020 [2024-04-26 15:36:38.343880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.020 [2024-04-26 15:36:38.343887] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.020 [2024-04-26 15:36:38.343893] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.020 [2024-04-26 15:36:38.343909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.020 qpair failed and we were unable to recover it.
00:26:21.020 [2024-04-26 15:36:38.353656] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.020 [2024-04-26 15:36:38.353725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.020 [2024-04-26 15:36:38.353745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.020 [2024-04-26 15:36:38.353752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.020 [2024-04-26 15:36:38.353759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.020 [2024-04-26 15:36:38.353774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.020 qpair failed and we were unable to recover it.
00:26:21.020 [2024-04-26 15:36:38.363829] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.020 [2024-04-26 15:36:38.363905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.020 [2024-04-26 15:36:38.363930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.020 [2024-04-26 15:36:38.363937] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.020 [2024-04-26 15:36:38.363943] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.020 [2024-04-26 15:36:38.363960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.020 qpair failed and we were unable to recover it.
00:26:21.020 [2024-04-26 15:36:38.373880] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.020 [2024-04-26 15:36:38.373953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.020 [2024-04-26 15:36:38.373972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.020 [2024-04-26 15:36:38.373979] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.020 [2024-04-26 15:36:38.373986] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.020 [2024-04-26 15:36:38.374002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.020 qpair failed and we were unable to recover it.
00:26:21.020 [2024-04-26 15:36:38.383763] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.020 [2024-04-26 15:36:38.383850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.020 [2024-04-26 15:36:38.383869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.020 [2024-04-26 15:36:38.383876] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.020 [2024-04-26 15:36:38.383883] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.020 [2024-04-26 15:36:38.383899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.020 qpair failed and we were unable to recover it.
00:26:21.020 [2024-04-26 15:36:38.393891] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.020 [2024-04-26 15:36:38.393958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.020 [2024-04-26 15:36:38.393978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.020 [2024-04-26 15:36:38.393985] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.020 [2024-04-26 15:36:38.393991] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.020 [2024-04-26 15:36:38.394007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.020 qpair failed and we were unable to recover it.
00:26:21.020 [2024-04-26 15:36:38.403936] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.020 [2024-04-26 15:36:38.404012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.020 [2024-04-26 15:36:38.404039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.020 [2024-04-26 15:36:38.404047] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.020 [2024-04-26 15:36:38.404054] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.020 [2024-04-26 15:36:38.404080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.020 qpair failed and we were unable to recover it.
00:26:21.020 [2024-04-26 15:36:38.413988] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.020 [2024-04-26 15:36:38.414052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.020 [2024-04-26 15:36:38.414073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.020 [2024-04-26 15:36:38.414080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.020 [2024-04-26 15:36:38.414086] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.020 [2024-04-26 15:36:38.414103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.020 qpair failed and we were unable to recover it.
00:26:21.020 [2024-04-26 15:36:38.424014] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.020 [2024-04-26 15:36:38.424092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.020 [2024-04-26 15:36:38.424111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.020 [2024-04-26 15:36:38.424118] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.020 [2024-04-26 15:36:38.424124] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.020 [2024-04-26 15:36:38.424140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.020 qpair failed and we were unable to recover it.
00:26:21.020 [2024-04-26 15:36:38.434021] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.020 [2024-04-26 15:36:38.434096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.020 [2024-04-26 15:36:38.434115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.020 [2024-04-26 15:36:38.434122] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.020 [2024-04-26 15:36:38.434128] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.020 [2024-04-26 15:36:38.434144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.020 qpair failed and we were unable to recover it.
00:26:21.020 [2024-04-26 15:36:38.444026] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.020 [2024-04-26 15:36:38.444098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.020 [2024-04-26 15:36:38.444117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.020 [2024-04-26 15:36:38.444124] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.020 [2024-04-26 15:36:38.444130] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.020 [2024-04-26 15:36:38.444146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.020 qpair failed and we were unable to recover it.
00:26:21.020 [2024-04-26 15:36:38.454126] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.020 [2024-04-26 15:36:38.454203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.021 [2024-04-26 15:36:38.454226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.021 [2024-04-26 15:36:38.454233] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.021 [2024-04-26 15:36:38.454239] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.021 [2024-04-26 15:36:38.454255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.021 qpair failed and we were unable to recover it.
00:26:21.021 [2024-04-26 15:36:38.464142] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.021 [2024-04-26 15:36:38.464226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.021 [2024-04-26 15:36:38.464245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.021 [2024-04-26 15:36:38.464252] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.021 [2024-04-26 15:36:38.464258] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.021 [2024-04-26 15:36:38.464275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.021 qpair failed and we were unable to recover it.
00:26:21.282 [2024-04-26 15:36:38.474169] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.283 [2024-04-26 15:36:38.474244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.283 [2024-04-26 15:36:38.474263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.283 [2024-04-26 15:36:38.474270] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.283 [2024-04-26 15:36:38.474277] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.283 [2024-04-26 15:36:38.474292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.283 qpair failed and we were unable to recover it.
00:26:21.283 [2024-04-26 15:36:38.484192] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.283 [2024-04-26 15:36:38.484276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.283 [2024-04-26 15:36:38.484296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.283 [2024-04-26 15:36:38.484303] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.283 [2024-04-26 15:36:38.484310] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.283 [2024-04-26 15:36:38.484326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.283 qpair failed and we were unable to recover it.
00:26:21.283 [2024-04-26 15:36:38.494204] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.283 [2024-04-26 15:36:38.494269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.283 [2024-04-26 15:36:38.494287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.283 [2024-04-26 15:36:38.494295] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.283 [2024-04-26 15:36:38.494307] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.283 [2024-04-26 15:36:38.494322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.283 qpair failed and we were unable to recover it.
00:26:21.283 [2024-04-26 15:36:38.504132] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.283 [2024-04-26 15:36:38.504244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.283 [2024-04-26 15:36:38.504263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.283 [2024-04-26 15:36:38.504270] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.283 [2024-04-26 15:36:38.504276] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.283 [2024-04-26 15:36:38.504292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.283 qpair failed and we were unable to recover it.
00:26:21.283 [2024-04-26 15:36:38.514271] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.283 [2024-04-26 15:36:38.514338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.283 [2024-04-26 15:36:38.514358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.283 [2024-04-26 15:36:38.514365] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.283 [2024-04-26 15:36:38.514372] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.283 [2024-04-26 15:36:38.514387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.283 qpair failed and we were unable to recover it.
00:26:21.283 [2024-04-26 15:36:38.524322] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.283 [2024-04-26 15:36:38.524377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.283 [2024-04-26 15:36:38.524396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.283 [2024-04-26 15:36:38.524403] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.283 [2024-04-26 15:36:38.524410] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.283 [2024-04-26 15:36:38.524425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.283 qpair failed and we were unable to recover it.
00:26:21.283 [2024-04-26 15:36:38.534377] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.283 [2024-04-26 15:36:38.534444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.283 [2024-04-26 15:36:38.534465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.283 [2024-04-26 15:36:38.534472] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.283 [2024-04-26 15:36:38.534478] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.283 [2024-04-26 15:36:38.534494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.283 qpair failed and we were unable to recover it.
00:26:21.283 [2024-04-26 15:36:38.544385] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.283 [2024-04-26 15:36:38.544472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.283 [2024-04-26 15:36:38.544491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.283 [2024-04-26 15:36:38.544498] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.283 [2024-04-26 15:36:38.544504] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.283 [2024-04-26 15:36:38.544521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.283 qpair failed and we were unable to recover it.
00:26:21.283 [2024-04-26 15:36:38.554412] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.283 [2024-04-26 15:36:38.554470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.283 [2024-04-26 15:36:38.554490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.283 [2024-04-26 15:36:38.554497] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.283 [2024-04-26 15:36:38.554503] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.283 [2024-04-26 15:36:38.554518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.283 qpair failed and we were unable to recover it.
00:26:21.283 [2024-04-26 15:36:38.564433] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.283 [2024-04-26 15:36:38.564501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.283 [2024-04-26 15:36:38.564519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.283 [2024-04-26 15:36:38.564527] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.283 [2024-04-26 15:36:38.564533] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.283 [2024-04-26 15:36:38.564548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.283 qpair failed and we were unable to recover it.
00:26:21.283 [2024-04-26 15:36:38.574495] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.283 [2024-04-26 15:36:38.574566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.283 [2024-04-26 15:36:38.574599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.283 [2024-04-26 15:36:38.574608] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.283 [2024-04-26 15:36:38.574615] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.283 [2024-04-26 15:36:38.574637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.283 qpair failed and we were unable to recover it.
00:26:21.283 [2024-04-26 15:36:38.584526] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.283 [2024-04-26 15:36:38.584606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.283 [2024-04-26 15:36:38.584640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.283 [2024-04-26 15:36:38.584649] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.283 [2024-04-26 15:36:38.584662] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.283 [2024-04-26 15:36:38.584685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.283 qpair failed and we were unable to recover it.
00:26:21.283 [2024-04-26 15:36:38.594533] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.283 [2024-04-26 15:36:38.594596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.283 [2024-04-26 15:36:38.594630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.283 [2024-04-26 15:36:38.594639] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.283 [2024-04-26 15:36:38.594646] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.283 [2024-04-26 15:36:38.594667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.283 qpair failed and we were unable to recover it.
00:26:21.283 [2024-04-26 15:36:38.604579] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.283 [2024-04-26 15:36:38.604652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.283 [2024-04-26 15:36:38.604675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.283 [2024-04-26 15:36:38.604682] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.283 [2024-04-26 15:36:38.604688] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.283 [2024-04-26 15:36:38.604706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.283 qpair failed and we were unable to recover it.
00:26:21.283 [2024-04-26 15:36:38.614620] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.283 [2024-04-26 15:36:38.614697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.283 [2024-04-26 15:36:38.614718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.283 [2024-04-26 15:36:38.614726] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.283 [2024-04-26 15:36:38.614732] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.283 [2024-04-26 15:36:38.614749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.283 qpair failed and we were unable to recover it. 
00:26:21.283 [2024-04-26 15:36:38.624681] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.283 [2024-04-26 15:36:38.624805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.283 [2024-04-26 15:36:38.624824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.283 [2024-04-26 15:36:38.624832] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.283 [2024-04-26 15:36:38.624846] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.283 [2024-04-26 15:36:38.624863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.283 qpair failed and we were unable to recover it. 
00:26:21.283 [2024-04-26 15:36:38.634656] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.283 [2024-04-26 15:36:38.634715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.283 [2024-04-26 15:36:38.634735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.283 [2024-04-26 15:36:38.634742] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.283 [2024-04-26 15:36:38.634749] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.283 [2024-04-26 15:36:38.634766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.283 qpair failed and we were unable to recover it. 
00:26:21.283 [2024-04-26 15:36:38.644712] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.283 [2024-04-26 15:36:38.644787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.283 [2024-04-26 15:36:38.644806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.283 [2024-04-26 15:36:38.644814] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.283 [2024-04-26 15:36:38.644821] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.283 [2024-04-26 15:36:38.644842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.283 qpair failed and we were unable to recover it. 
00:26:21.283 [2024-04-26 15:36:38.654766] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.283 [2024-04-26 15:36:38.654847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.283 [2024-04-26 15:36:38.654875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.283 [2024-04-26 15:36:38.654884] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.283 [2024-04-26 15:36:38.654890] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.283 [2024-04-26 15:36:38.654912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.283 qpair failed and we were unable to recover it. 
00:26:21.283 [2024-04-26 15:36:38.664660] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.283 [2024-04-26 15:36:38.664734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.283 [2024-04-26 15:36:38.664754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.283 [2024-04-26 15:36:38.664761] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.283 [2024-04-26 15:36:38.664768] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.284 [2024-04-26 15:36:38.664784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.284 qpair failed and we were unable to recover it. 
00:26:21.284 [2024-04-26 15:36:38.674820] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.284 [2024-04-26 15:36:38.674904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.284 [2024-04-26 15:36:38.674924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.284 [2024-04-26 15:36:38.674938] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.284 [2024-04-26 15:36:38.674944] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.284 [2024-04-26 15:36:38.674961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.284 qpair failed and we were unable to recover it. 
00:26:21.284 [2024-04-26 15:36:38.684872] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.284 [2024-04-26 15:36:38.684976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.284 [2024-04-26 15:36:38.684995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.284 [2024-04-26 15:36:38.685003] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.284 [2024-04-26 15:36:38.685010] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.284 [2024-04-26 15:36:38.685027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.284 qpair failed and we were unable to recover it. 
00:26:21.284 [2024-04-26 15:36:38.694895] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.284 [2024-04-26 15:36:38.694968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.284 [2024-04-26 15:36:38.694987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.284 [2024-04-26 15:36:38.694995] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.284 [2024-04-26 15:36:38.695001] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.284 [2024-04-26 15:36:38.695018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.284 qpair failed and we were unable to recover it. 
00:26:21.284 [2024-04-26 15:36:38.704797] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.284 [2024-04-26 15:36:38.704892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.284 [2024-04-26 15:36:38.704915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.284 [2024-04-26 15:36:38.704923] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.284 [2024-04-26 15:36:38.704929] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.284 [2024-04-26 15:36:38.704947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.284 qpair failed and we were unable to recover it. 
00:26:21.284 [2024-04-26 15:36:38.714927] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.284 [2024-04-26 15:36:38.714998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.284 [2024-04-26 15:36:38.715020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.284 [2024-04-26 15:36:38.715027] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.284 [2024-04-26 15:36:38.715033] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.284 [2024-04-26 15:36:38.715051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.284 qpair failed and we were unable to recover it. 
00:26:21.284 [2024-04-26 15:36:38.724968] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.284 [2024-04-26 15:36:38.725034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.284 [2024-04-26 15:36:38.725053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.284 [2024-04-26 15:36:38.725061] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.284 [2024-04-26 15:36:38.725067] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.284 [2024-04-26 15:36:38.725083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.284 qpair failed and we were unable to recover it. 
00:26:21.546 [2024-04-26 15:36:38.735008] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.546 [2024-04-26 15:36:38.735080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.546 [2024-04-26 15:36:38.735099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.546 [2024-04-26 15:36:38.735107] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.546 [2024-04-26 15:36:38.735113] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.546 [2024-04-26 15:36:38.735130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.546 qpair failed and we were unable to recover it. 
00:26:21.546 [2024-04-26 15:36:38.744960] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.546 [2024-04-26 15:36:38.745066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.546 [2024-04-26 15:36:38.745085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.546 [2024-04-26 15:36:38.745093] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.546 [2024-04-26 15:36:38.745099] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.546 [2024-04-26 15:36:38.745115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.546 qpair failed and we were unable to recover it. 
00:26:21.546 [2024-04-26 15:36:38.755053] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.546 [2024-04-26 15:36:38.755109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.546 [2024-04-26 15:36:38.755128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.546 [2024-04-26 15:36:38.755135] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.547 [2024-04-26 15:36:38.755141] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.547 [2024-04-26 15:36:38.755157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.547 qpair failed and we were unable to recover it. 
00:26:21.547 [2024-04-26 15:36:38.765072] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.547 [2024-04-26 15:36:38.765151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.547 [2024-04-26 15:36:38.765181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.547 [2024-04-26 15:36:38.765188] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.547 [2024-04-26 15:36:38.765195] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.547 [2024-04-26 15:36:38.765211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.547 qpair failed and we were unable to recover it. 
00:26:21.547 [2024-04-26 15:36:38.775040] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.547 [2024-04-26 15:36:38.775122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.547 [2024-04-26 15:36:38.775141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.547 [2024-04-26 15:36:38.775148] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.547 [2024-04-26 15:36:38.775154] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.547 [2024-04-26 15:36:38.775169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.547 qpair failed and we were unable to recover it. 
00:26:21.547 [2024-04-26 15:36:38.785048] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.547 [2024-04-26 15:36:38.785132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.547 [2024-04-26 15:36:38.785153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.547 [2024-04-26 15:36:38.785160] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.547 [2024-04-26 15:36:38.785167] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.547 [2024-04-26 15:36:38.785183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.547 qpair failed and we were unable to recover it. 
00:26:21.547 [2024-04-26 15:36:38.795185] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.547 [2024-04-26 15:36:38.795247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.547 [2024-04-26 15:36:38.795267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.547 [2024-04-26 15:36:38.795274] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.547 [2024-04-26 15:36:38.795280] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.547 [2024-04-26 15:36:38.795296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.547 qpair failed and we were unable to recover it. 
00:26:21.547 [2024-04-26 15:36:38.805176] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.547 [2024-04-26 15:36:38.805241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.547 [2024-04-26 15:36:38.805260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.547 [2024-04-26 15:36:38.805267] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.547 [2024-04-26 15:36:38.805273] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.547 [2024-04-26 15:36:38.805295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.547 qpair failed and we were unable to recover it. 
00:26:21.547 [2024-04-26 15:36:38.815286] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.547 [2024-04-26 15:36:38.815388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.547 [2024-04-26 15:36:38.815409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.547 [2024-04-26 15:36:38.815417] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.547 [2024-04-26 15:36:38.815423] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.547 [2024-04-26 15:36:38.815439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.547 qpair failed and we were unable to recover it. 
00:26:21.547 [2024-04-26 15:36:38.825281] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.547 [2024-04-26 15:36:38.825359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.547 [2024-04-26 15:36:38.825378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.547 [2024-04-26 15:36:38.825385] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.547 [2024-04-26 15:36:38.825391] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.547 [2024-04-26 15:36:38.825407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.547 qpair failed and we were unable to recover it. 
00:26:21.547 [2024-04-26 15:36:38.835273] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.547 [2024-04-26 15:36:38.835345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.547 [2024-04-26 15:36:38.835364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.547 [2024-04-26 15:36:38.835371] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.547 [2024-04-26 15:36:38.835377] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.547 [2024-04-26 15:36:38.835393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.547 qpair failed and we were unable to recover it. 
00:26:21.547 [2024-04-26 15:36:38.845299] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.547 [2024-04-26 15:36:38.845367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.547 [2024-04-26 15:36:38.845387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.547 [2024-04-26 15:36:38.845395] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.547 [2024-04-26 15:36:38.845402] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.547 [2024-04-26 15:36:38.845417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.547 qpair failed and we were unable to recover it. 
00:26:21.547 [2024-04-26 15:36:38.855340] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.547 [2024-04-26 15:36:38.855405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.547 [2024-04-26 15:36:38.855430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.547 [2024-04-26 15:36:38.855437] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.547 [2024-04-26 15:36:38.855443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.547 [2024-04-26 15:36:38.855459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.547 qpair failed and we were unable to recover it. 
00:26:21.547 [2024-04-26 15:36:38.865356] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.547 [2024-04-26 15:36:38.865439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.547 [2024-04-26 15:36:38.865458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.547 [2024-04-26 15:36:38.865466] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.547 [2024-04-26 15:36:38.865472] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.547 [2024-04-26 15:36:38.865487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.547 qpair failed and we were unable to recover it. 
00:26:21.547 [2024-04-26 15:36:38.875341] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.547 [2024-04-26 15:36:38.875428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.548 [2024-04-26 15:36:38.875447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.548 [2024-04-26 15:36:38.875455] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.548 [2024-04-26 15:36:38.875461] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.548 [2024-04-26 15:36:38.875476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.548 qpair failed and we were unable to recover it. 
00:26:21.548 [2024-04-26 15:36:38.885335] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.548 [2024-04-26 15:36:38.885407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.548 [2024-04-26 15:36:38.885428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.548 [2024-04-26 15:36:38.885435] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.548 [2024-04-26 15:36:38.885442] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.548 [2024-04-26 15:36:38.885460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.548 qpair failed and we were unable to recover it. 
00:26:21.548 [2024-04-26 15:36:38.895377] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.548 [2024-04-26 15:36:38.895444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.548 [2024-04-26 15:36:38.895464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.548 [2024-04-26 15:36:38.895471] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.548 [2024-04-26 15:36:38.895482] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.548 [2024-04-26 15:36:38.895498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.548 qpair failed and we were unable to recover it. 
00:26:21.548 [2024-04-26 15:36:38.905510] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.548 [2024-04-26 15:36:38.905596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.548 [2024-04-26 15:36:38.905625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.548 [2024-04-26 15:36:38.905633] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.548 [2024-04-26 15:36:38.905640] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.548 [2024-04-26 15:36:38.905661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.548 qpair failed and we were unable to recover it. 
00:26:21.548 [2024-04-26 15:36:38.915450] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.548 [2024-04-26 15:36:38.915515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.548 [2024-04-26 15:36:38.915537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.548 [2024-04-26 15:36:38.915544] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.548 [2024-04-26 15:36:38.915551] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.548 [2024-04-26 15:36:38.915569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.548 qpair failed and we were unable to recover it. 
00:26:21.548 [2024-04-26 15:36:38.925586] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.548 [2024-04-26 15:36:38.925654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.548 [2024-04-26 15:36:38.925688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.548 [2024-04-26 15:36:38.925697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.548 [2024-04-26 15:36:38.925704] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.548 [2024-04-26 15:36:38.925726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.548 qpair failed and we were unable to recover it. 
00:26:21.548 [2024-04-26 15:36:38.935653] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.548 [2024-04-26 15:36:38.935771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.548 [2024-04-26 15:36:38.935793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.548 [2024-04-26 15:36:38.935801] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.548 [2024-04-26 15:36:38.935807] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.548 [2024-04-26 15:36:38.935825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.548 qpair failed and we were unable to recover it. 
00:26:21.548 [2024-04-26 15:36:38.945668] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.548 [2024-04-26 15:36:38.945751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.548 [2024-04-26 15:36:38.945771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.548 [2024-04-26 15:36:38.945779] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.548 [2024-04-26 15:36:38.945785] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.548 [2024-04-26 15:36:38.945801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.548 qpair failed and we were unable to recover it. 
00:26:21.548 [2024-04-26 15:36:38.955555] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.548 [2024-04-26 15:36:38.955626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.548 [2024-04-26 15:36:38.955645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.548 [2024-04-26 15:36:38.955652] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.548 [2024-04-26 15:36:38.955658] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.548 [2024-04-26 15:36:38.955674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.548 qpair failed and we were unable to recover it. 
00:26:21.548 [2024-04-26 15:36:38.965617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.548 [2024-04-26 15:36:38.965701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.548 [2024-04-26 15:36:38.965721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.548 [2024-04-26 15:36:38.965731] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.548 [2024-04-26 15:36:38.965738] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.548 [2024-04-26 15:36:38.965756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.548 qpair failed and we were unable to recover it. 
00:26:21.548 [2024-04-26 15:36:38.975830] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.548 [2024-04-26 15:36:38.975919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.548 [2024-04-26 15:36:38.975938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.548 [2024-04-26 15:36:38.975945] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.548 [2024-04-26 15:36:38.975952] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.548 [2024-04-26 15:36:38.975968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.548 qpair failed and we were unable to recover it. 
00:26:21.548 [2024-04-26 15:36:38.985728] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.548 [2024-04-26 15:36:38.985811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.548 [2024-04-26 15:36:38.985829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.548 [2024-04-26 15:36:38.985844] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.548 [2024-04-26 15:36:38.985858] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.548 [2024-04-26 15:36:38.985876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.548 qpair failed and we were unable to recover it. 
00:26:21.811 [2024-04-26 15:36:38.995796] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.811 [2024-04-26 15:36:38.995884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.811 [2024-04-26 15:36:38.995903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.811 [2024-04-26 15:36:38.995910] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.811 [2024-04-26 15:36:38.995916] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.811 [2024-04-26 15:36:38.995932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.811 qpair failed and we were unable to recover it. 
00:26:21.811 [2024-04-26 15:36:39.005859] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.811 [2024-04-26 15:36:39.005930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.811 [2024-04-26 15:36:39.005949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.811 [2024-04-26 15:36:39.005956] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.811 [2024-04-26 15:36:39.005963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.811 [2024-04-26 15:36:39.005978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.811 qpair failed and we were unable to recover it. 
00:26:21.811 [2024-04-26 15:36:39.015910] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.811 [2024-04-26 15:36:39.015979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.811 [2024-04-26 15:36:39.015999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.811 [2024-04-26 15:36:39.016006] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.811 [2024-04-26 15:36:39.016013] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.811 [2024-04-26 15:36:39.016029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.811 qpair failed and we were unable to recover it. 
00:26:21.811 [2024-04-26 15:36:39.025904] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.811 [2024-04-26 15:36:39.025991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.811 [2024-04-26 15:36:39.026011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.811 [2024-04-26 15:36:39.026018] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.811 [2024-04-26 15:36:39.026024] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.811 [2024-04-26 15:36:39.026040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.811 qpair failed and we were unable to recover it. 
00:26:21.811 [2024-04-26 15:36:39.035882] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.811 [2024-04-26 15:36:39.035950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.811 [2024-04-26 15:36:39.035970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.811 [2024-04-26 15:36:39.035977] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.811 [2024-04-26 15:36:39.035983] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.811 [2024-04-26 15:36:39.035999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.811 qpair failed and we were unable to recover it. 
00:26:21.811 [2024-04-26 15:36:39.045967] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.811 [2024-04-26 15:36:39.046045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.811 [2024-04-26 15:36:39.046063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.811 [2024-04-26 15:36:39.046070] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.811 [2024-04-26 15:36:39.046077] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.811 [2024-04-26 15:36:39.046092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.811 qpair failed and we were unable to recover it. 
00:26:21.811 [2024-04-26 15:36:39.055886] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.811 [2024-04-26 15:36:39.055951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.811 [2024-04-26 15:36:39.055969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.811 [2024-04-26 15:36:39.055975] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.811 [2024-04-26 15:36:39.055982] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.811 [2024-04-26 15:36:39.055998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.811 qpair failed and we were unable to recover it. 
00:26:21.811 [2024-04-26 15:36:39.066046] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.811 [2024-04-26 15:36:39.066129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.811 [2024-04-26 15:36:39.066147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.811 [2024-04-26 15:36:39.066154] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.811 [2024-04-26 15:36:39.066160] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.811 [2024-04-26 15:36:39.066176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.811 qpair failed and we were unable to recover it. 
00:26:21.811 [2024-04-26 15:36:39.076056] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.811 [2024-04-26 15:36:39.076114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.811 [2024-04-26 15:36:39.076132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.811 [2024-04-26 15:36:39.076144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.811 [2024-04-26 15:36:39.076151] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.811 [2024-04-26 15:36:39.076166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.811 qpair failed and we were unable to recover it. 
00:26:21.811 [2024-04-26 15:36:39.086153] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.811 [2024-04-26 15:36:39.086233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.811 [2024-04-26 15:36:39.086252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.811 [2024-04-26 15:36:39.086259] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.811 [2024-04-26 15:36:39.086265] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.811 [2024-04-26 15:36:39.086280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.811 qpair failed and we were unable to recover it. 
00:26:21.811 [2024-04-26 15:36:39.096192] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.811 [2024-04-26 15:36:39.096259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.811 [2024-04-26 15:36:39.096278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.811 [2024-04-26 15:36:39.096285] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.811 [2024-04-26 15:36:39.096291] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.811 [2024-04-26 15:36:39.096307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.811 qpair failed and we were unable to recover it. 
00:26:21.811 [2024-04-26 15:36:39.106187] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.811 [2024-04-26 15:36:39.106314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.811 [2024-04-26 15:36:39.106332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.811 [2024-04-26 15:36:39.106339] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.811 [2024-04-26 15:36:39.106345] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.811 [2024-04-26 15:36:39.106361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.811 qpair failed and we were unable to recover it. 
00:26:21.811 [2024-04-26 15:36:39.116061] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.811 [2024-04-26 15:36:39.116139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.811 [2024-04-26 15:36:39.116159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.811 [2024-04-26 15:36:39.116166] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.811 [2024-04-26 15:36:39.116173] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.812 [2024-04-26 15:36:39.116188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.812 qpair failed and we were unable to recover it. 
00:26:21.812 [2024-04-26 15:36:39.126179] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.812 [2024-04-26 15:36:39.126259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.812 [2024-04-26 15:36:39.126277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.812 [2024-04-26 15:36:39.126284] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.812 [2024-04-26 15:36:39.126291] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.812 [2024-04-26 15:36:39.126306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.812 qpair failed and we were unable to recover it. 
00:26:21.812 [2024-04-26 15:36:39.136300] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.812 [2024-04-26 15:36:39.136368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.812 [2024-04-26 15:36:39.136388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.812 [2024-04-26 15:36:39.136395] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.812 [2024-04-26 15:36:39.136401] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.812 [2024-04-26 15:36:39.136416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.812 qpair failed and we were unable to recover it. 
00:26:21.812 [2024-04-26 15:36:39.146267] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.812 [2024-04-26 15:36:39.146349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.812 [2024-04-26 15:36:39.146368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.812 [2024-04-26 15:36:39.146375] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.812 [2024-04-26 15:36:39.146381] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.812 [2024-04-26 15:36:39.146397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.812 qpair failed and we were unable to recover it. 
00:26:21.812 [2024-04-26 15:36:39.156291] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.812 [2024-04-26 15:36:39.156379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.812 [2024-04-26 15:36:39.156406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.812 [2024-04-26 15:36:39.156415] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.812 [2024-04-26 15:36:39.156421] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.812 [2024-04-26 15:36:39.156441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.812 qpair failed and we were unable to recover it. 
00:26:21.812 [2024-04-26 15:36:39.166327] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.812 [2024-04-26 15:36:39.166388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.812 [2024-04-26 15:36:39.166414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.812 [2024-04-26 15:36:39.166421] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.812 [2024-04-26 15:36:39.166428] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.812 [2024-04-26 15:36:39.166445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.812 qpair failed and we were unable to recover it. 
00:26:21.812 [2024-04-26 15:36:39.176276] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.812 [2024-04-26 15:36:39.176341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.812 [2024-04-26 15:36:39.176360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.812 [2024-04-26 15:36:39.176367] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.812 [2024-04-26 15:36:39.176373] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.812 [2024-04-26 15:36:39.176389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.812 qpair failed and we were unable to recover it. 
00:26:21.812 [2024-04-26 15:36:39.186430] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.812 [2024-04-26 15:36:39.186508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.812 [2024-04-26 15:36:39.186528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.812 [2024-04-26 15:36:39.186535] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.812 [2024-04-26 15:36:39.186541] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.812 [2024-04-26 15:36:39.186557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.812 qpair failed and we were unable to recover it. 
00:26:21.812 [2024-04-26 15:36:39.196425] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.812 [2024-04-26 15:36:39.196483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.812 [2024-04-26 15:36:39.196502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.812 [2024-04-26 15:36:39.196509] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.812 [2024-04-26 15:36:39.196515] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.812 [2024-04-26 15:36:39.196531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.812 qpair failed and we were unable to recover it. 
00:26:21.812 [2024-04-26 15:36:39.206503] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.812 [2024-04-26 15:36:39.206608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.812 [2024-04-26 15:36:39.206627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.812 [2024-04-26 15:36:39.206634] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.812 [2024-04-26 15:36:39.206641] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:21.812 [2024-04-26 15:36:39.206663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.812 qpair failed and we were unable to recover it. 
00:26:21.812 [2024-04-26 15:36:39.216533] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.812 [2024-04-26 15:36:39.216605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.812 [2024-04-26 15:36:39.216626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.812 [2024-04-26 15:36:39.216633] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.812 [2024-04-26 15:36:39.216640] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.812 [2024-04-26 15:36:39.216656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.812 qpair failed and we were unable to recover it.
00:26:21.812 [2024-04-26 15:36:39.226561] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.812 [2024-04-26 15:36:39.226641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.812 [2024-04-26 15:36:39.226660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.812 [2024-04-26 15:36:39.226667] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.812 [2024-04-26 15:36:39.226673] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.812 [2024-04-26 15:36:39.226690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.812 qpair failed and we were unable to recover it.
00:26:21.812 [2024-04-26 15:36:39.236585] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.812 [2024-04-26 15:36:39.236645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.812 [2024-04-26 15:36:39.236664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.812 [2024-04-26 15:36:39.236671] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.812 [2024-04-26 15:36:39.236677] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.812 [2024-04-26 15:36:39.236693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.812 qpair failed and we were unable to recover it.
00:26:21.812 [2024-04-26 15:36:39.246502] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.812 [2024-04-26 15:36:39.246571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.812 [2024-04-26 15:36:39.246593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.812 [2024-04-26 15:36:39.246601] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.812 [2024-04-26 15:36:39.246607] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.813 [2024-04-26 15:36:39.246624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.813 qpair failed and we were unable to recover it.
00:26:21.813 [2024-04-26 15:36:39.256537] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:21.813 [2024-04-26 15:36:39.256606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:21.813 [2024-04-26 15:36:39.256631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:21.813 [2024-04-26 15:36:39.256639] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:21.813 [2024-04-26 15:36:39.256645] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:21.813 [2024-04-26 15:36:39.256661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.813 qpair failed and we were unable to recover it.
00:26:22.075 [2024-04-26 15:36:39.266647] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.075 [2024-04-26 15:36:39.266732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.075 [2024-04-26 15:36:39.266752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.075 [2024-04-26 15:36:39.266760] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.075 [2024-04-26 15:36:39.266766] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.075 [2024-04-26 15:36:39.266784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.075 qpair failed and we were unable to recover it.
00:26:22.075 [2024-04-26 15:36:39.276726] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.075 [2024-04-26 15:36:39.276787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.075 [2024-04-26 15:36:39.276806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.075 [2024-04-26 15:36:39.276813] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.075 [2024-04-26 15:36:39.276820] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.075 [2024-04-26 15:36:39.276845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.075 qpair failed and we were unable to recover it.
00:26:22.075 [2024-04-26 15:36:39.286763] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.075 [2024-04-26 15:36:39.286824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.075 [2024-04-26 15:36:39.286851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.075 [2024-04-26 15:36:39.286858] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.075 [2024-04-26 15:36:39.286865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.075 [2024-04-26 15:36:39.286881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.075 qpair failed and we were unable to recover it.
00:26:22.076 [2024-04-26 15:36:39.296804] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.076 [2024-04-26 15:36:39.296876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.076 [2024-04-26 15:36:39.296895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.076 [2024-04-26 15:36:39.296902] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.076 [2024-04-26 15:36:39.296908] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.076 [2024-04-26 15:36:39.296931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.076 qpair failed and we were unable to recover it.
00:26:22.076 [2024-04-26 15:36:39.306821] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.076 [2024-04-26 15:36:39.306905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.076 [2024-04-26 15:36:39.306924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.076 [2024-04-26 15:36:39.306931] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.076 [2024-04-26 15:36:39.306937] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.076 [2024-04-26 15:36:39.306954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.076 qpair failed and we were unable to recover it.
00:26:22.076 [2024-04-26 15:36:39.316862] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.076 [2024-04-26 15:36:39.316920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.076 [2024-04-26 15:36:39.316940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.076 [2024-04-26 15:36:39.316947] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.076 [2024-04-26 15:36:39.316953] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.076 [2024-04-26 15:36:39.316970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.076 qpair failed and we were unable to recover it.
00:26:22.076 [2024-04-26 15:36:39.326885] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.076 [2024-04-26 15:36:39.326959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.076 [2024-04-26 15:36:39.326978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.076 [2024-04-26 15:36:39.326985] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.076 [2024-04-26 15:36:39.326992] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.076 [2024-04-26 15:36:39.327008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.076 qpair failed and we were unable to recover it.
00:26:22.076 [2024-04-26 15:36:39.336808] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.076 [2024-04-26 15:36:39.336898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.076 [2024-04-26 15:36:39.336918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.076 [2024-04-26 15:36:39.336926] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.076 [2024-04-26 15:36:39.336932] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.076 [2024-04-26 15:36:39.336949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.076 qpair failed and we were unable to recover it.
00:26:22.076 [2024-04-26 15:36:39.346943] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.076 [2024-04-26 15:36:39.347039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.076 [2024-04-26 15:36:39.347058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.076 [2024-04-26 15:36:39.347065] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.076 [2024-04-26 15:36:39.347072] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.076 [2024-04-26 15:36:39.347088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.076 qpair failed and we were unable to recover it.
00:26:22.076 [2024-04-26 15:36:39.356999] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.076 [2024-04-26 15:36:39.357062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.076 [2024-04-26 15:36:39.357081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.076 [2024-04-26 15:36:39.357088] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.076 [2024-04-26 15:36:39.357095] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.076 [2024-04-26 15:36:39.357111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.076 qpair failed and we were unable to recover it.
00:26:22.076 [2024-04-26 15:36:39.367055] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.076 [2024-04-26 15:36:39.367131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.076 [2024-04-26 15:36:39.367149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.076 [2024-04-26 15:36:39.367156] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.076 [2024-04-26 15:36:39.367162] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.076 [2024-04-26 15:36:39.367178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.076 qpair failed and we were unable to recover it.
00:26:22.076 [2024-04-26 15:36:39.377088] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.076 [2024-04-26 15:36:39.377162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.076 [2024-04-26 15:36:39.377182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.076 [2024-04-26 15:36:39.377190] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.076 [2024-04-26 15:36:39.377196] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.076 [2024-04-26 15:36:39.377212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.076 qpair failed and we were unable to recover it.
00:26:22.076 [2024-04-26 15:36:39.387043] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.076 [2024-04-26 15:36:39.387126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.076 [2024-04-26 15:36:39.387146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.076 [2024-04-26 15:36:39.387153] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.076 [2024-04-26 15:36:39.387165] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.076 [2024-04-26 15:36:39.387181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.076 qpair failed and we were unable to recover it.
00:26:22.076 [2024-04-26 15:36:39.396991] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.076 [2024-04-26 15:36:39.397068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.076 [2024-04-26 15:36:39.397088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.076 [2024-04-26 15:36:39.397095] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.076 [2024-04-26 15:36:39.397101] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.076 [2024-04-26 15:36:39.397118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.076 qpair failed and we were unable to recover it.
00:26:22.076 [2024-04-26 15:36:39.407153] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.076 [2024-04-26 15:36:39.407263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.076 [2024-04-26 15:36:39.407291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.076 [2024-04-26 15:36:39.407300] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.076 [2024-04-26 15:36:39.407306] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.076 [2024-04-26 15:36:39.407327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.076 qpair failed and we were unable to recover it.
00:26:22.076 [2024-04-26 15:36:39.417178] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.076 [2024-04-26 15:36:39.417246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.076 [2024-04-26 15:36:39.417267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.076 [2024-04-26 15:36:39.417274] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.076 [2024-04-26 15:36:39.417281] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.077 [2024-04-26 15:36:39.417297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.077 qpair failed and we were unable to recover it.
00:26:22.077 [2024-04-26 15:36:39.427204] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.077 [2024-04-26 15:36:39.427284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.077 [2024-04-26 15:36:39.427303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.077 [2024-04-26 15:36:39.427310] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.077 [2024-04-26 15:36:39.427317] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.077 [2024-04-26 15:36:39.427333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.077 qpair failed and we were unable to recover it.
00:26:22.077 [2024-04-26 15:36:39.437225] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.077 [2024-04-26 15:36:39.437296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.077 [2024-04-26 15:36:39.437316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.077 [2024-04-26 15:36:39.437323] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.077 [2024-04-26 15:36:39.437329] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.077 [2024-04-26 15:36:39.437346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.077 qpair failed and we were unable to recover it.
00:26:22.077 [2024-04-26 15:36:39.447228] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.077 [2024-04-26 15:36:39.447311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.077 [2024-04-26 15:36:39.447330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.077 [2024-04-26 15:36:39.447337] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.077 [2024-04-26 15:36:39.447343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.077 [2024-04-26 15:36:39.447359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.077 qpair failed and we were unable to recover it.
00:26:22.077 [2024-04-26 15:36:39.457297] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.077 [2024-04-26 15:36:39.457363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.077 [2024-04-26 15:36:39.457381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.077 [2024-04-26 15:36:39.457389] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.077 [2024-04-26 15:36:39.457395] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.077 [2024-04-26 15:36:39.457410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.077 qpair failed and we were unable to recover it.
00:26:22.077 [2024-04-26 15:36:39.467288] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.077 [2024-04-26 15:36:39.467358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.077 [2024-04-26 15:36:39.467378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.077 [2024-04-26 15:36:39.467385] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.077 [2024-04-26 15:36:39.467391] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.077 [2024-04-26 15:36:39.467407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.077 qpair failed and we were unable to recover it.
00:26:22.077 [2024-04-26 15:36:39.477376] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.077 [2024-04-26 15:36:39.477448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.077 [2024-04-26 15:36:39.477467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.077 [2024-04-26 15:36:39.477480] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.077 [2024-04-26 15:36:39.477486] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.077 [2024-04-26 15:36:39.477502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.077 qpair failed and we were unable to recover it.
00:26:22.077 [2024-04-26 15:36:39.487344] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.077 [2024-04-26 15:36:39.487404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.077 [2024-04-26 15:36:39.487423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.077 [2024-04-26 15:36:39.487430] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.077 [2024-04-26 15:36:39.487437] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.077 [2024-04-26 15:36:39.487452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.077 qpair failed and we were unable to recover it.
00:26:22.077 [2024-04-26 15:36:39.497412] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.077 [2024-04-26 15:36:39.497478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.077 [2024-04-26 15:36:39.497496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.077 [2024-04-26 15:36:39.497503] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.077 [2024-04-26 15:36:39.497510] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.077 [2024-04-26 15:36:39.497526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.077 qpair failed and we were unable to recover it.
00:26:22.077 [2024-04-26 15:36:39.507300] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.077 [2024-04-26 15:36:39.507383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.077 [2024-04-26 15:36:39.507404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.077 [2024-04-26 15:36:39.507411] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.077 [2024-04-26 15:36:39.507417] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.077 [2024-04-26 15:36:39.507440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.077 qpair failed and we were unable to recover it.
00:26:22.077 [2024-04-26 15:36:39.517455] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.077 [2024-04-26 15:36:39.517521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.077 [2024-04-26 15:36:39.517540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.077 [2024-04-26 15:36:39.517548] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.077 [2024-04-26 15:36:39.517554] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.077 [2024-04-26 15:36:39.517570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.077 qpair failed and we were unable to recover it.
00:26:22.339 [2024-04-26 15:36:39.527482] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.339 [2024-04-26 15:36:39.527565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.339 [2024-04-26 15:36:39.527583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.339 [2024-04-26 15:36:39.527590] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.339 [2024-04-26 15:36:39.527596] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.339 [2024-04-26 15:36:39.527612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.339 qpair failed and we were unable to recover it.
00:26:22.339 [2024-04-26 15:36:39.537524] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.339 [2024-04-26 15:36:39.537591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.339 [2024-04-26 15:36:39.537609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.339 [2024-04-26 15:36:39.537616] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.339 [2024-04-26 15:36:39.537623] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.339 [2024-04-26 15:36:39.537639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.339 qpair failed and we were unable to recover it.
00:26:22.339 [2024-04-26 15:36:39.547491] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.339 [2024-04-26 15:36:39.547553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.339 [2024-04-26 15:36:39.547574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.339 [2024-04-26 15:36:39.547581] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.339 [2024-04-26 15:36:39.547587] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.339 [2024-04-26 15:36:39.547604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.339 qpair failed and we were unable to recover it.
00:26:22.339 [2024-04-26 15:36:39.557541] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.339 [2024-04-26 15:36:39.557610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.339 [2024-04-26 15:36:39.557639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.339 [2024-04-26 15:36:39.557647] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.339 [2024-04-26 15:36:39.557654] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.339 [2024-04-26 15:36:39.557674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.339 qpair failed and we were unable to recover it.
00:26:22.339 [2024-04-26 15:36:39.567591] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.339 [2024-04-26 15:36:39.567661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.339 [2024-04-26 15:36:39.567695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.340 [2024-04-26 15:36:39.567704] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.340 [2024-04-26 15:36:39.567710] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.340 [2024-04-26 15:36:39.567730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.340 qpair failed and we were unable to recover it.
00:26:22.340 [2024-04-26 15:36:39.577601] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.340 [2024-04-26 15:36:39.577694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.340 [2024-04-26 15:36:39.577712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.340 [2024-04-26 15:36:39.577719] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.340 [2024-04-26 15:36:39.577725] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.340 [2024-04-26 15:36:39.577741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.340 qpair failed and we were unable to recover it. 
00:26:22.340 [2024-04-26 15:36:39.587615] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.340 [2024-04-26 15:36:39.587704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.340 [2024-04-26 15:36:39.587720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.340 [2024-04-26 15:36:39.587727] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.340 [2024-04-26 15:36:39.587734] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.340 [2024-04-26 15:36:39.587749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.340 qpair failed and we were unable to recover it. 
00:26:22.340 [2024-04-26 15:36:39.597675] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.340 [2024-04-26 15:36:39.597734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.340 [2024-04-26 15:36:39.597749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.340 [2024-04-26 15:36:39.597756] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.340 [2024-04-26 15:36:39.597762] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.340 [2024-04-26 15:36:39.597777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.340 qpair failed and we were unable to recover it. 
00:26:22.340 [2024-04-26 15:36:39.607693] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.340 [2024-04-26 15:36:39.607752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.340 [2024-04-26 15:36:39.607768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.340 [2024-04-26 15:36:39.607774] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.340 [2024-04-26 15:36:39.607781] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.340 [2024-04-26 15:36:39.607795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.340 qpair failed and we were unable to recover it. 
00:26:22.340 [2024-04-26 15:36:39.617727] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.340 [2024-04-26 15:36:39.617809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.340 [2024-04-26 15:36:39.617826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.340 [2024-04-26 15:36:39.617833] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.340 [2024-04-26 15:36:39.617846] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.340 [2024-04-26 15:36:39.617861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.340 qpair failed and we were unable to recover it. 
00:26:22.340 [2024-04-26 15:36:39.627678] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.340 [2024-04-26 15:36:39.627739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.340 [2024-04-26 15:36:39.627754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.340 [2024-04-26 15:36:39.627761] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.340 [2024-04-26 15:36:39.627767] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.340 [2024-04-26 15:36:39.627782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.340 qpair failed and we were unable to recover it. 
00:26:22.340 [2024-04-26 15:36:39.637776] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.340 [2024-04-26 15:36:39.637835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.340 [2024-04-26 15:36:39.637855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.340 [2024-04-26 15:36:39.637862] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.340 [2024-04-26 15:36:39.637868] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.340 [2024-04-26 15:36:39.637882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.340 qpair failed and we were unable to recover it. 
00:26:22.340 [2024-04-26 15:36:39.647744] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.340 [2024-04-26 15:36:39.647830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.340 [2024-04-26 15:36:39.647850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.340 [2024-04-26 15:36:39.647857] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.340 [2024-04-26 15:36:39.647863] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.340 [2024-04-26 15:36:39.647877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.340 qpair failed and we were unable to recover it. 
00:26:22.340 [2024-04-26 15:36:39.657709] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.340 [2024-04-26 15:36:39.657770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.340 [2024-04-26 15:36:39.657793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.340 [2024-04-26 15:36:39.657801] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.340 [2024-04-26 15:36:39.657807] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.340 [2024-04-26 15:36:39.657825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.340 qpair failed and we were unable to recover it. 
00:26:22.340 [2024-04-26 15:36:39.667686] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.340 [2024-04-26 15:36:39.667743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.340 [2024-04-26 15:36:39.667759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.340 [2024-04-26 15:36:39.667766] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.340 [2024-04-26 15:36:39.667772] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.340 [2024-04-26 15:36:39.667786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.340 qpair failed and we were unable to recover it. 
00:26:22.340 [2024-04-26 15:36:39.677887] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.340 [2024-04-26 15:36:39.677941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.340 [2024-04-26 15:36:39.677956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.340 [2024-04-26 15:36:39.677963] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.340 [2024-04-26 15:36:39.677969] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.340 [2024-04-26 15:36:39.677983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.340 qpair failed and we were unable to recover it. 
00:26:22.340 [2024-04-26 15:36:39.687897] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.340 [2024-04-26 15:36:39.687954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.340 [2024-04-26 15:36:39.687969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.340 [2024-04-26 15:36:39.687978] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.340 [2024-04-26 15:36:39.687984] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.340 [2024-04-26 15:36:39.687999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.340 qpair failed and we were unable to recover it. 
00:26:22.340 [2024-04-26 15:36:39.697853] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.340 [2024-04-26 15:36:39.697910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.340 [2024-04-26 15:36:39.697924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.340 [2024-04-26 15:36:39.697931] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.340 [2024-04-26 15:36:39.697937] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.341 [2024-04-26 15:36:39.697955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.341 qpair failed and we were unable to recover it. 
00:26:22.341 [2024-04-26 15:36:39.707906] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.341 [2024-04-26 15:36:39.707972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.341 [2024-04-26 15:36:39.707986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.341 [2024-04-26 15:36:39.707993] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.341 [2024-04-26 15:36:39.707999] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.341 [2024-04-26 15:36:39.708013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.341 qpair failed and we were unable to recover it. 
00:26:22.341 [2024-04-26 15:36:39.717875] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.341 [2024-04-26 15:36:39.717938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.341 [2024-04-26 15:36:39.717952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.341 [2024-04-26 15:36:39.717959] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.341 [2024-04-26 15:36:39.717966] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.341 [2024-04-26 15:36:39.717980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.341 qpair failed and we were unable to recover it. 
00:26:22.341 [2024-04-26 15:36:39.728023] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.341 [2024-04-26 15:36:39.728076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.341 [2024-04-26 15:36:39.728090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.341 [2024-04-26 15:36:39.728097] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.341 [2024-04-26 15:36:39.728103] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.341 [2024-04-26 15:36:39.728118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.341 qpair failed and we were unable to recover it. 
00:26:22.341 [2024-04-26 15:36:39.738102] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.341 [2024-04-26 15:36:39.738163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.341 [2024-04-26 15:36:39.738177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.341 [2024-04-26 15:36:39.738184] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.341 [2024-04-26 15:36:39.738190] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.341 [2024-04-26 15:36:39.738203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.341 qpair failed and we were unable to recover it. 
00:26:22.341 [2024-04-26 15:36:39.747917] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.341 [2024-04-26 15:36:39.747973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.341 [2024-04-26 15:36:39.747991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.341 [2024-04-26 15:36:39.747997] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.341 [2024-04-26 15:36:39.748004] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.341 [2024-04-26 15:36:39.748017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.341 qpair failed and we were unable to recover it. 
00:26:22.341 [2024-04-26 15:36:39.757993] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.341 [2024-04-26 15:36:39.758052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.341 [2024-04-26 15:36:39.758066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.341 [2024-04-26 15:36:39.758073] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.341 [2024-04-26 15:36:39.758079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.341 [2024-04-26 15:36:39.758093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.341 qpair failed and we were unable to recover it. 
00:26:22.341 [2024-04-26 15:36:39.768139] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.341 [2024-04-26 15:36:39.768199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.341 [2024-04-26 15:36:39.768213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.341 [2024-04-26 15:36:39.768219] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.341 [2024-04-26 15:36:39.768225] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.341 [2024-04-26 15:36:39.768239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.341 qpair failed and we were unable to recover it. 
00:26:22.341 [2024-04-26 15:36:39.778175] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.341 [2024-04-26 15:36:39.778238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.341 [2024-04-26 15:36:39.778251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.341 [2024-04-26 15:36:39.778258] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.341 [2024-04-26 15:36:39.778264] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.341 [2024-04-26 15:36:39.778277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.341 qpair failed and we were unable to recover it. 
00:26:22.603 [2024-04-26 15:36:39.788142] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.603 [2024-04-26 15:36:39.788200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.603 [2024-04-26 15:36:39.788215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.603 [2024-04-26 15:36:39.788222] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.603 [2024-04-26 15:36:39.788235] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.603 [2024-04-26 15:36:39.788249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.603 qpair failed and we were unable to recover it. 
00:26:22.603 [2024-04-26 15:36:39.798202] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.603 [2024-04-26 15:36:39.798259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.603 [2024-04-26 15:36:39.798273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.603 [2024-04-26 15:36:39.798280] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.603 [2024-04-26 15:36:39.798286] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.603 [2024-04-26 15:36:39.798300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.603 qpair failed and we were unable to recover it. 
00:26:22.603 [2024-04-26 15:36:39.808247] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.603 [2024-04-26 15:36:39.808302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.603 [2024-04-26 15:36:39.808317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.603 [2024-04-26 15:36:39.808323] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.603 [2024-04-26 15:36:39.808330] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.603 [2024-04-26 15:36:39.808343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.603 qpair failed and we were unable to recover it. 
00:26:22.603 [2024-04-26 15:36:39.818286] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.603 [2024-04-26 15:36:39.818341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.603 [2024-04-26 15:36:39.818356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.603 [2024-04-26 15:36:39.818363] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.603 [2024-04-26 15:36:39.818369] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.603 [2024-04-26 15:36:39.818383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.603 qpair failed and we were unable to recover it. 
00:26:22.603 [2024-04-26 15:36:39.828265] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.603 [2024-04-26 15:36:39.828370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.603 [2024-04-26 15:36:39.828384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.603 [2024-04-26 15:36:39.828391] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.603 [2024-04-26 15:36:39.828397] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.603 [2024-04-26 15:36:39.828411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.603 qpair failed and we were unable to recover it. 
00:26:22.603 [2024-04-26 15:36:39.838199] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.603 [2024-04-26 15:36:39.838257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.603 [2024-04-26 15:36:39.838271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.603 [2024-04-26 15:36:39.838277] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.603 [2024-04-26 15:36:39.838283] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.603 [2024-04-26 15:36:39.838296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.603 qpair failed and we were unable to recover it. 
00:26:22.603 [2024-04-26 15:36:39.848358] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.603 [2024-04-26 15:36:39.848412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.603 [2024-04-26 15:36:39.848426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.603 [2024-04-26 15:36:39.848432] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.603 [2024-04-26 15:36:39.848438] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.603 [2024-04-26 15:36:39.848452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.603 qpair failed and we were unable to recover it. 
00:26:22.603 [2024-04-26 15:36:39.858398] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.603 [2024-04-26 15:36:39.858459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.603 [2024-04-26 15:36:39.858473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.603 [2024-04-26 15:36:39.858479] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.603 [2024-04-26 15:36:39.858485] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.603 [2024-04-26 15:36:39.858499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.603 qpair failed and we were unable to recover it.
00:26:22.603 [2024-04-26 15:36:39.868372] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.603 [2024-04-26 15:36:39.868424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.603 [2024-04-26 15:36:39.868438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.603 [2024-04-26 15:36:39.868444] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.603 [2024-04-26 15:36:39.868450] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.603 [2024-04-26 15:36:39.868463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.603 qpair failed and we were unable to recover it.
00:26:22.603 [2024-04-26 15:36:39.878417] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.603 [2024-04-26 15:36:39.878498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.603 [2024-04-26 15:36:39.878511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.603 [2024-04-26 15:36:39.878522] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.603 [2024-04-26 15:36:39.878528] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.603 [2024-04-26 15:36:39.878541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.603 qpair failed and we were unable to recover it.
00:26:22.603 [2024-04-26 15:36:39.888333] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.603 [2024-04-26 15:36:39.888383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.603 [2024-04-26 15:36:39.888397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.603 [2024-04-26 15:36:39.888404] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.603 [2024-04-26 15:36:39.888410] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.603 [2024-04-26 15:36:39.888430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.603 qpair failed and we were unable to recover it.
00:26:22.603 [2024-04-26 15:36:39.898380] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.603 [2024-04-26 15:36:39.898444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.603 [2024-04-26 15:36:39.898458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.603 [2024-04-26 15:36:39.898464] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.603 [2024-04-26 15:36:39.898470] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.603 [2024-04-26 15:36:39.898483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.603 qpair failed and we were unable to recover it.
00:26:22.603 [2024-04-26 15:36:39.908477] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.603 [2024-04-26 15:36:39.908534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.603 [2024-04-26 15:36:39.908552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.603 [2024-04-26 15:36:39.908559] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.603 [2024-04-26 15:36:39.908565] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.603 [2024-04-26 15:36:39.908580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.603 qpair failed and we were unable to recover it.
00:26:22.603 [2024-04-26 15:36:39.918437] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.603 [2024-04-26 15:36:39.918489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.603 [2024-04-26 15:36:39.918504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.603 [2024-04-26 15:36:39.918510] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.603 [2024-04-26 15:36:39.918517] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.603 [2024-04-26 15:36:39.918530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.603 qpair failed and we were unable to recover it.
00:26:22.603 [2024-04-26 15:36:39.928582] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.603 [2024-04-26 15:36:39.928640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.603 [2024-04-26 15:36:39.928655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.603 [2024-04-26 15:36:39.928661] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.603 [2024-04-26 15:36:39.928667] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.603 [2024-04-26 15:36:39.928681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.603 qpair failed and we were unable to recover it.
00:26:22.603 [2024-04-26 15:36:39.938610] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.603 [2024-04-26 15:36:39.938665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.603 [2024-04-26 15:36:39.938679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.603 [2024-04-26 15:36:39.938685] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.603 [2024-04-26 15:36:39.938691] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.603 [2024-04-26 15:36:39.938704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.603 qpair failed and we were unable to recover it.
00:26:22.604 [2024-04-26 15:36:39.948469] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.604 [2024-04-26 15:36:39.948528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.604 [2024-04-26 15:36:39.948542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.604 [2024-04-26 15:36:39.948549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.604 [2024-04-26 15:36:39.948555] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.604 [2024-04-26 15:36:39.948568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.604 qpair failed and we were unable to recover it.
00:26:22.604 [2024-04-26 15:36:39.958657] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.604 [2024-04-26 15:36:39.958737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.604 [2024-04-26 15:36:39.958751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.604 [2024-04-26 15:36:39.958758] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.604 [2024-04-26 15:36:39.958764] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.604 [2024-04-26 15:36:39.958777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.604 qpair failed and we were unable to recover it.
00:26:22.604 [2024-04-26 15:36:39.968695] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.604 [2024-04-26 15:36:39.968751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.604 [2024-04-26 15:36:39.968765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.604 [2024-04-26 15:36:39.968776] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.604 [2024-04-26 15:36:39.968782] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.604 [2024-04-26 15:36:39.968796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.604 qpair failed and we were unable to recover it.
00:26:22.604 [2024-04-26 15:36:39.978742] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.604 [2024-04-26 15:36:39.978845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.604 [2024-04-26 15:36:39.978859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.604 [2024-04-26 15:36:39.978866] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.604 [2024-04-26 15:36:39.978872] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.604 [2024-04-26 15:36:39.978886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.604 qpair failed and we were unable to recover it.
00:26:22.604 [2024-04-26 15:36:39.988675] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.604 [2024-04-26 15:36:39.988729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.604 [2024-04-26 15:36:39.988743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.604 [2024-04-26 15:36:39.988751] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.604 [2024-04-26 15:36:39.988757] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.604 [2024-04-26 15:36:39.988770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.604 qpair failed and we were unable to recover it.
00:26:22.604 [2024-04-26 15:36:39.998765] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.604 [2024-04-26 15:36:39.998817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.604 [2024-04-26 15:36:39.998831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.604 [2024-04-26 15:36:39.998843] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.604 [2024-04-26 15:36:39.998851] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.604 [2024-04-26 15:36:39.998865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.604 qpair failed and we were unable to recover it.
00:26:22.604 [2024-04-26 15:36:40.008805] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.604 [2024-04-26 15:36:40.008911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.604 [2024-04-26 15:36:40.008927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.604 [2024-04-26 15:36:40.008934] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.604 [2024-04-26 15:36:40.008940] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.604 [2024-04-26 15:36:40.008955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.604 qpair failed and we were unable to recover it.
00:26:22.604 [2024-04-26 15:36:40.018777] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.604 [2024-04-26 15:36:40.018832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.604 [2024-04-26 15:36:40.018851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.604 [2024-04-26 15:36:40.018858] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.604 [2024-04-26 15:36:40.018864] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.604 [2024-04-26 15:36:40.018878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.604 qpair failed and we were unable to recover it.
00:26:22.604 [2024-04-26 15:36:40.028815] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.604 [2024-04-26 15:36:40.028868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.604 [2024-04-26 15:36:40.028882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.604 [2024-04-26 15:36:40.028888] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.604 [2024-04-26 15:36:40.028895] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.604 [2024-04-26 15:36:40.028908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.604 qpair failed and we were unable to recover it.
00:26:22.604 [2024-04-26 15:36:40.038754] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.604 [2024-04-26 15:36:40.038858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.604 [2024-04-26 15:36:40.038877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.604 [2024-04-26 15:36:40.038886] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.604 [2024-04-26 15:36:40.038894] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.604 [2024-04-26 15:36:40.038912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.604 qpair failed and we were unable to recover it.
00:26:22.604 [2024-04-26 15:36:40.048787] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.604 [2024-04-26 15:36:40.048853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.604 [2024-04-26 15:36:40.048869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.604 [2024-04-26 15:36:40.048876] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.604 [2024-04-26 15:36:40.048882] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.604 [2024-04-26 15:36:40.048896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.604 qpair failed and we were unable to recover it.
00:26:22.865 [2024-04-26 15:36:40.058774] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.865 [2024-04-26 15:36:40.058825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.865 [2024-04-26 15:36:40.058847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.865 [2024-04-26 15:36:40.058855] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.865 [2024-04-26 15:36:40.058861] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.865 [2024-04-26 15:36:40.058875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.865 qpair failed and we were unable to recover it.
00:26:22.865 [2024-04-26 15:36:40.068891] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.865 [2024-04-26 15:36:40.068945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.865 [2024-04-26 15:36:40.068959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.865 [2024-04-26 15:36:40.068966] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.865 [2024-04-26 15:36:40.068972] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.865 [2024-04-26 15:36:40.068986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.865 qpair failed and we were unable to recover it.
00:26:22.865 [2024-04-26 15:36:40.078853] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.865 [2024-04-26 15:36:40.078906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.865 [2024-04-26 15:36:40.078919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.865 [2024-04-26 15:36:40.078926] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.865 [2024-04-26 15:36:40.078932] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.865 [2024-04-26 15:36:40.078945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.865 qpair failed and we were unable to recover it.
00:26:22.865 [2024-04-26 15:36:40.089021] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.865 [2024-04-26 15:36:40.089072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.865 [2024-04-26 15:36:40.089085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.865 [2024-04-26 15:36:40.089092] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.865 [2024-04-26 15:36:40.089098] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.865 [2024-04-26 15:36:40.089111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.865 qpair failed and we were unable to recover it.
00:26:22.865 [2024-04-26 15:36:40.098888] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.865 [2024-04-26 15:36:40.098937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.866 [2024-04-26 15:36:40.098950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.866 [2024-04-26 15:36:40.098957] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.866 [2024-04-26 15:36:40.098963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.866 [2024-04-26 15:36:40.098980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.866 qpair failed and we were unable to recover it.
00:26:22.866 [2024-04-26 15:36:40.108924] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.866 [2024-04-26 15:36:40.108979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.866 [2024-04-26 15:36:40.108993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.866 [2024-04-26 15:36:40.109000] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.866 [2024-04-26 15:36:40.109006] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.866 [2024-04-26 15:36:40.109019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.866 qpair failed and we were unable to recover it.
00:26:22.866 [2024-04-26 15:36:40.119091] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.866 [2024-04-26 15:36:40.119142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.866 [2024-04-26 15:36:40.119155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.866 [2024-04-26 15:36:40.119162] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.866 [2024-04-26 15:36:40.119168] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.866 [2024-04-26 15:36:40.119182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.866 qpair failed and we were unable to recover it.
00:26:22.866 [2024-04-26 15:36:40.129136] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.866 [2024-04-26 15:36:40.129192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.866 [2024-04-26 15:36:40.129206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.866 [2024-04-26 15:36:40.129212] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.866 [2024-04-26 15:36:40.129218] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.866 [2024-04-26 15:36:40.129232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.866 qpair failed and we were unable to recover it.
00:26:22.866 [2024-04-26 15:36:40.139150] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.866 [2024-04-26 15:36:40.139227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.866 [2024-04-26 15:36:40.139241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.866 [2024-04-26 15:36:40.139247] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.866 [2024-04-26 15:36:40.139253] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.866 [2024-04-26 15:36:40.139267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.866 qpair failed and we were unable to recover it.
00:26:22.866 [2024-04-26 15:36:40.149133] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.866 [2024-04-26 15:36:40.149185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.866 [2024-04-26 15:36:40.149202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.866 [2024-04-26 15:36:40.149209] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.866 [2024-04-26 15:36:40.149215] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.866 [2024-04-26 15:36:40.149228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.866 qpair failed and we were unable to recover it.
00:26:22.866 [2024-04-26 15:36:40.159181] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.866 [2024-04-26 15:36:40.159231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.866 [2024-04-26 15:36:40.159248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.866 [2024-04-26 15:36:40.159255] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.866 [2024-04-26 15:36:40.159261] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.866 [2024-04-26 15:36:40.159277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.866 qpair failed and we were unable to recover it.
00:26:22.866 [2024-04-26 15:36:40.169095] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.866 [2024-04-26 15:36:40.169151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.866 [2024-04-26 15:36:40.169165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.866 [2024-04-26 15:36:40.169171] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.866 [2024-04-26 15:36:40.169177] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.866 [2024-04-26 15:36:40.169191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.866 qpair failed and we were unable to recover it.
00:26:22.866 [2024-04-26 15:36:40.179221] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.866 [2024-04-26 15:36:40.179272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.866 [2024-04-26 15:36:40.179286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.866 [2024-04-26 15:36:40.179292] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.866 [2024-04-26 15:36:40.179298] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.866 [2024-04-26 15:36:40.179312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.866 qpair failed and we were unable to recover it.
00:26:22.866 [2024-04-26 15:36:40.189232] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.866 [2024-04-26 15:36:40.189287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.866 [2024-04-26 15:36:40.189301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.866 [2024-04-26 15:36:40.189307] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.866 [2024-04-26 15:36:40.189317] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.866 [2024-04-26 15:36:40.189330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.866 qpair failed and we were unable to recover it.
00:26:22.866 [2024-04-26 15:36:40.199294] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.866 [2024-04-26 15:36:40.199349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.866 [2024-04-26 15:36:40.199362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.866 [2024-04-26 15:36:40.199369] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.866 [2024-04-26 15:36:40.199374] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.866 [2024-04-26 15:36:40.199388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.866 qpair failed and we were unable to recover it.
00:26:22.866 [2024-04-26 15:36:40.209324] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:22.866 [2024-04-26 15:36:40.209374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:22.866 [2024-04-26 15:36:40.209387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:22.866 [2024-04-26 15:36:40.209394] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:22.866 [2024-04-26 15:36:40.209400] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:22.866 [2024-04-26 15:36:40.209413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:22.866 qpair failed and we were unable to recover it.
00:26:22.866 [2024-04-26 15:36:40.219290] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.866 [2024-04-26 15:36:40.219339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.866 [2024-04-26 15:36:40.219353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.866 [2024-04-26 15:36:40.219359] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.866 [2024-04-26 15:36:40.219365] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.866 [2024-04-26 15:36:40.219378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.866 qpair failed and we were unable to recover it. 
00:26:22.866 [2024-04-26 15:36:40.229333] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.866 [2024-04-26 15:36:40.229381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.866 [2024-04-26 15:36:40.229395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.866 [2024-04-26 15:36:40.229401] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.867 [2024-04-26 15:36:40.229407] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.867 [2024-04-26 15:36:40.229421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.867 qpair failed and we were unable to recover it. 
00:26:22.867 [2024-04-26 15:36:40.239400] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.867 [2024-04-26 15:36:40.239466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.867 [2024-04-26 15:36:40.239479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.867 [2024-04-26 15:36:40.239486] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.867 [2024-04-26 15:36:40.239492] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.867 [2024-04-26 15:36:40.239505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.867 qpair failed and we were unable to recover it. 
00:26:22.867 [2024-04-26 15:36:40.249446] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.867 [2024-04-26 15:36:40.249498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.867 [2024-04-26 15:36:40.249512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.867 [2024-04-26 15:36:40.249519] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.867 [2024-04-26 15:36:40.249525] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.867 [2024-04-26 15:36:40.249538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.867 qpair failed and we were unable to recover it. 
00:26:22.867 [2024-04-26 15:36:40.259357] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.867 [2024-04-26 15:36:40.259406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.867 [2024-04-26 15:36:40.259420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.867 [2024-04-26 15:36:40.259426] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.867 [2024-04-26 15:36:40.259432] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.867 [2024-04-26 15:36:40.259446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.867 qpair failed and we were unable to recover it. 
00:26:22.867 [2024-04-26 15:36:40.269468] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.867 [2024-04-26 15:36:40.269524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.867 [2024-04-26 15:36:40.269537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.867 [2024-04-26 15:36:40.269544] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.867 [2024-04-26 15:36:40.269550] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.867 [2024-04-26 15:36:40.269563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.867 qpair failed and we were unable to recover it. 
00:26:22.867 [2024-04-26 15:36:40.279518] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.867 [2024-04-26 15:36:40.279577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.867 [2024-04-26 15:36:40.279591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.867 [2024-04-26 15:36:40.279597] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.867 [2024-04-26 15:36:40.279609] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.867 [2024-04-26 15:36:40.279623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.867 qpair failed and we were unable to recover it. 
00:26:22.867 [2024-04-26 15:36:40.289572] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.867 [2024-04-26 15:36:40.289647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.867 [2024-04-26 15:36:40.289670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.867 [2024-04-26 15:36:40.289678] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.867 [2024-04-26 15:36:40.289685] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.867 [2024-04-26 15:36:40.289703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.867 qpair failed and we were unable to recover it. 
00:26:22.867 [2024-04-26 15:36:40.299542] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.867 [2024-04-26 15:36:40.299593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.867 [2024-04-26 15:36:40.299608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.867 [2024-04-26 15:36:40.299615] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.867 [2024-04-26 15:36:40.299621] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.867 [2024-04-26 15:36:40.299635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.867 qpair failed and we were unable to recover it. 
00:26:22.867 [2024-04-26 15:36:40.309633] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.867 [2024-04-26 15:36:40.309731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.867 [2024-04-26 15:36:40.309746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.867 [2024-04-26 15:36:40.309753] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.867 [2024-04-26 15:36:40.309759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:22.867 [2024-04-26 15:36:40.309772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.867 qpair failed and we were unable to recover it. 
00:26:23.130 [2024-04-26 15:36:40.319627] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.130 [2024-04-26 15:36:40.319688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.130 [2024-04-26 15:36:40.319710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.130 [2024-04-26 15:36:40.319717] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.130 [2024-04-26 15:36:40.319723] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.130 [2024-04-26 15:36:40.319741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.130 qpair failed and we were unable to recover it. 
00:26:23.130 [2024-04-26 15:36:40.329630] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.130 [2024-04-26 15:36:40.329685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.130 [2024-04-26 15:36:40.329699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.130 [2024-04-26 15:36:40.329706] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.130 [2024-04-26 15:36:40.329712] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.130 [2024-04-26 15:36:40.329725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.130 qpair failed and we were unable to recover it. 
00:26:23.130 [2024-04-26 15:36:40.339649] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.130 [2024-04-26 15:36:40.339703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.130 [2024-04-26 15:36:40.339717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.130 [2024-04-26 15:36:40.339724] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.130 [2024-04-26 15:36:40.339730] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.130 [2024-04-26 15:36:40.339743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.130 qpair failed and we were unable to recover it. 
00:26:23.130 [2024-04-26 15:36:40.349683] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.130 [2024-04-26 15:36:40.349737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.130 [2024-04-26 15:36:40.349752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.130 [2024-04-26 15:36:40.349759] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.130 [2024-04-26 15:36:40.349765] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.130 [2024-04-26 15:36:40.349779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.130 qpair failed and we were unable to recover it. 
00:26:23.130 [2024-04-26 15:36:40.359738] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.130 [2024-04-26 15:36:40.359792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.130 [2024-04-26 15:36:40.359806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.130 [2024-04-26 15:36:40.359813] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.130 [2024-04-26 15:36:40.359819] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.130 [2024-04-26 15:36:40.359832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.130 qpair failed and we were unable to recover it. 
00:26:23.130 [2024-04-26 15:36:40.369846] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.130 [2024-04-26 15:36:40.369903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.130 [2024-04-26 15:36:40.369917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.130 [2024-04-26 15:36:40.369927] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.130 [2024-04-26 15:36:40.369933] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.130 [2024-04-26 15:36:40.369947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.130 qpair failed and we were unable to recover it. 
00:26:23.130 [2024-04-26 15:36:40.379756] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.130 [2024-04-26 15:36:40.379811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.130 [2024-04-26 15:36:40.379825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.130 [2024-04-26 15:36:40.379831] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.130 [2024-04-26 15:36:40.379842] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.130 [2024-04-26 15:36:40.379856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.130 qpair failed and we were unable to recover it. 
00:26:23.130 [2024-04-26 15:36:40.389654] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.130 [2024-04-26 15:36:40.389707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.130 [2024-04-26 15:36:40.389720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.130 [2024-04-26 15:36:40.389727] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.130 [2024-04-26 15:36:40.389733] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.130 [2024-04-26 15:36:40.389746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.130 qpair failed and we were unable to recover it. 
00:26:23.130 [2024-04-26 15:36:40.399708] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.130 [2024-04-26 15:36:40.399760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.130 [2024-04-26 15:36:40.399773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.130 [2024-04-26 15:36:40.399780] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.130 [2024-04-26 15:36:40.399786] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.130 [2024-04-26 15:36:40.399800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.130 qpair failed and we were unable to recover it. 
00:26:23.130 [2024-04-26 15:36:40.409880] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.130 [2024-04-26 15:36:40.409939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.130 [2024-04-26 15:36:40.409956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.130 [2024-04-26 15:36:40.409963] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.130 [2024-04-26 15:36:40.409969] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.130 [2024-04-26 15:36:40.409985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.130 qpair failed and we were unable to recover it. 
00:26:23.130 [2024-04-26 15:36:40.419857] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.130 [2024-04-26 15:36:40.419907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.130 [2024-04-26 15:36:40.419921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.130 [2024-04-26 15:36:40.419928] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.130 [2024-04-26 15:36:40.419934] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.130 [2024-04-26 15:36:40.419948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.130 qpair failed and we were unable to recover it. 
00:26:23.130 [2024-04-26 15:36:40.429888] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.130 [2024-04-26 15:36:40.429943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.130 [2024-04-26 15:36:40.429957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.130 [2024-04-26 15:36:40.429963] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.130 [2024-04-26 15:36:40.429970] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.130 [2024-04-26 15:36:40.429983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.130 qpair failed and we were unable to recover it. 
00:26:23.130 [2024-04-26 15:36:40.439943] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.130 [2024-04-26 15:36:40.439998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.130 [2024-04-26 15:36:40.440012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.131 [2024-04-26 15:36:40.440019] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.131 [2024-04-26 15:36:40.440024] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.131 [2024-04-26 15:36:40.440038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.131 qpair failed and we were unable to recover it. 
00:26:23.131 [2024-04-26 15:36:40.449933] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.131 [2024-04-26 15:36:40.449984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.131 [2024-04-26 15:36:40.449998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.131 [2024-04-26 15:36:40.450005] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.131 [2024-04-26 15:36:40.450011] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.131 [2024-04-26 15:36:40.450025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.131 qpair failed and we were unable to recover it. 
00:26:23.131 [2024-04-26 15:36:40.459857] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.131 [2024-04-26 15:36:40.459909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.131 [2024-04-26 15:36:40.459926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.131 [2024-04-26 15:36:40.459933] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.131 [2024-04-26 15:36:40.459939] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.131 [2024-04-26 15:36:40.459952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.131 qpair failed and we were unable to recover it. 
00:26:23.131 [2024-04-26 15:36:40.469961] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.131 [2024-04-26 15:36:40.470023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.131 [2024-04-26 15:36:40.470037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.131 [2024-04-26 15:36:40.470044] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.131 [2024-04-26 15:36:40.470050] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.131 [2024-04-26 15:36:40.470063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.131 qpair failed and we were unable to recover it. 
00:26:23.131 [2024-04-26 15:36:40.480053] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.131 [2024-04-26 15:36:40.480153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.131 [2024-04-26 15:36:40.480168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.131 [2024-04-26 15:36:40.480175] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.131 [2024-04-26 15:36:40.480183] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.131 [2024-04-26 15:36:40.480197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.131 qpair failed and we were unable to recover it. 
00:26:23.131 [2024-04-26 15:36:40.490015] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.131 [2024-04-26 15:36:40.490063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.131 [2024-04-26 15:36:40.490077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.131 [2024-04-26 15:36:40.490084] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.131 [2024-04-26 15:36:40.490090] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.131 [2024-04-26 15:36:40.490103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.131 qpair failed and we were unable to recover it. 
00:26:23.131 [2024-04-26 15:36:40.499951] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.131 [2024-04-26 15:36:40.500000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.131 [2024-04-26 15:36:40.500014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.131 [2024-04-26 15:36:40.500021] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.131 [2024-04-26 15:36:40.500027] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.131 [2024-04-26 15:36:40.500044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.131 qpair failed and we were unable to recover it. 
00:26:23.131 [2024-04-26 15:36:40.510098] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.131 [2024-04-26 15:36:40.510150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.131 [2024-04-26 15:36:40.510164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.131 [2024-04-26 15:36:40.510171] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.131 [2024-04-26 15:36:40.510177] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.131 [2024-04-26 15:36:40.510191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.131 qpair failed and we were unable to recover it. 
00:26:23.131 [2024-04-26 15:36:40.520152] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.131 [2024-04-26 15:36:40.520199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.131 [2024-04-26 15:36:40.520213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.131 [2024-04-26 15:36:40.520220] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.131 [2024-04-26 15:36:40.520226] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.131 [2024-04-26 15:36:40.520239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.131 qpair failed and we were unable to recover it. 
00:26:23.131 [2024-04-26 15:36:40.530122] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.131 [2024-04-26 15:36:40.530211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.131 [2024-04-26 15:36:40.530224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.131 [2024-04-26 15:36:40.530231] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.131 [2024-04-26 15:36:40.530237] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.131 [2024-04-26 15:36:40.530250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.131 qpair failed and we were unable to recover it. 
00:26:23.131 [2024-04-26 15:36:40.540187] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.131 [2024-04-26 15:36:40.540233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.131 [2024-04-26 15:36:40.540247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.131 [2024-04-26 15:36:40.540253] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.131 [2024-04-26 15:36:40.540259] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.131 [2024-04-26 15:36:40.540273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.131 qpair failed and we were unable to recover it. 
00:26:23.131 [2024-04-26 15:36:40.550208] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.131 [2024-04-26 15:36:40.550263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.131 [2024-04-26 15:36:40.550281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.131 [2024-04-26 15:36:40.550287] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.131 [2024-04-26 15:36:40.550293] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.131 [2024-04-26 15:36:40.550307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.131 qpair failed and we were unable to recover it. 
00:26:23.131 [2024-04-26 15:36:40.560281] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.131 [2024-04-26 15:36:40.560374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.131 [2024-04-26 15:36:40.560388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.131 [2024-04-26 15:36:40.560394] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.131 [2024-04-26 15:36:40.560401] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.131 [2024-04-26 15:36:40.560414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.131 qpair failed and we were unable to recover it. 
00:26:23.131 [2024-04-26 15:36:40.570202] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.131 [2024-04-26 15:36:40.570254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.131 [2024-04-26 15:36:40.570268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.131 [2024-04-26 15:36:40.570274] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.131 [2024-04-26 15:36:40.570280] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.132 [2024-04-26 15:36:40.570293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.132 qpair failed and we were unable to recover it. 
00:26:23.393 [2024-04-26 15:36:40.580284] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.393 [2024-04-26 15:36:40.580329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.394 [2024-04-26 15:36:40.580343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.394 [2024-04-26 15:36:40.580350] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.394 [2024-04-26 15:36:40.580356] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.394 [2024-04-26 15:36:40.580369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.394 qpair failed and we were unable to recover it. 
00:26:23.394 [2024-04-26 15:36:40.590295] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.394 [2024-04-26 15:36:40.590350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.394 [2024-04-26 15:36:40.590363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.394 [2024-04-26 15:36:40.590370] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.394 [2024-04-26 15:36:40.590379] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.394 [2024-04-26 15:36:40.590393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.394 qpair failed and we were unable to recover it. 
00:26:23.394 [2024-04-26 15:36:40.600364] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.394 [2024-04-26 15:36:40.600415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.394 [2024-04-26 15:36:40.600428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.394 [2024-04-26 15:36:40.600435] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.394 [2024-04-26 15:36:40.600441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.394 [2024-04-26 15:36:40.600454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.394 qpair failed and we were unable to recover it. 
00:26:23.394 [2024-04-26 15:36:40.610355] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.394 [2024-04-26 15:36:40.610452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.394 [2024-04-26 15:36:40.610466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.394 [2024-04-26 15:36:40.610473] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.394 [2024-04-26 15:36:40.610478] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.394 [2024-04-26 15:36:40.610492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.394 qpair failed and we were unable to recover it. 
00:26:23.394 [2024-04-26 15:36:40.620370] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.394 [2024-04-26 15:36:40.620422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.394 [2024-04-26 15:36:40.620435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.394 [2024-04-26 15:36:40.620442] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.394 [2024-04-26 15:36:40.620448] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.394 [2024-04-26 15:36:40.620461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.394 qpair failed and we were unable to recover it. 
00:26:23.394 [2024-04-26 15:36:40.630389] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.394 [2024-04-26 15:36:40.630441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.394 [2024-04-26 15:36:40.630455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.394 [2024-04-26 15:36:40.630461] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.394 [2024-04-26 15:36:40.630467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.394 [2024-04-26 15:36:40.630480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.394 qpair failed and we were unable to recover it. 
00:26:23.394 [2024-04-26 15:36:40.640462] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.394 [2024-04-26 15:36:40.640514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.394 [2024-04-26 15:36:40.640527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.394 [2024-04-26 15:36:40.640534] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.394 [2024-04-26 15:36:40.640540] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.394 [2024-04-26 15:36:40.640553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.394 qpair failed and we were unable to recover it. 
00:26:23.394 [2024-04-26 15:36:40.650443] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.394 [2024-04-26 15:36:40.650492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.394 [2024-04-26 15:36:40.650507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.394 [2024-04-26 15:36:40.650513] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.394 [2024-04-26 15:36:40.650520] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.394 [2024-04-26 15:36:40.650533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.394 qpair failed and we were unable to recover it. 
00:26:23.394 [2024-04-26 15:36:40.660553] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.394 [2024-04-26 15:36:40.660624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.394 [2024-04-26 15:36:40.660641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.394 [2024-04-26 15:36:40.660648] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.394 [2024-04-26 15:36:40.660654] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.394 [2024-04-26 15:36:40.660669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.394 qpair failed and we were unable to recover it. 
00:26:23.394 [2024-04-26 15:36:40.670524] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.394 [2024-04-26 15:36:40.670576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.394 [2024-04-26 15:36:40.670590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.394 [2024-04-26 15:36:40.670596] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.394 [2024-04-26 15:36:40.670602] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.394 [2024-04-26 15:36:40.670616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.394 qpair failed and we were unable to recover it. 
00:26:23.394 [2024-04-26 15:36:40.680595] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.394 [2024-04-26 15:36:40.680645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.394 [2024-04-26 15:36:40.680658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.394 [2024-04-26 15:36:40.680665] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.394 [2024-04-26 15:36:40.680674] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.394 [2024-04-26 15:36:40.680688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.394 qpair failed and we were unable to recover it. 
00:26:23.394 [2024-04-26 15:36:40.690576] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.394 [2024-04-26 15:36:40.690623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.394 [2024-04-26 15:36:40.690637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.394 [2024-04-26 15:36:40.690643] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.394 [2024-04-26 15:36:40.690649] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.394 [2024-04-26 15:36:40.690663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.394 qpair failed and we were unable to recover it. 
00:26:23.394 [2024-04-26 15:36:40.700480] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.394 [2024-04-26 15:36:40.700528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.394 [2024-04-26 15:36:40.700541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.394 [2024-04-26 15:36:40.700548] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.394 [2024-04-26 15:36:40.700554] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.394 [2024-04-26 15:36:40.700568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.394 qpair failed and we were unable to recover it. 
00:26:23.394 [2024-04-26 15:36:40.710500] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.394 [2024-04-26 15:36:40.710555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.394 [2024-04-26 15:36:40.710569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.394 [2024-04-26 15:36:40.710576] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.395 [2024-04-26 15:36:40.710582] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.395 [2024-04-26 15:36:40.710595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.395 qpair failed and we were unable to recover it. 
00:26:23.395 [2024-04-26 15:36:40.720695] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.395 [2024-04-26 15:36:40.720751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.395 [2024-04-26 15:36:40.720764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.395 [2024-04-26 15:36:40.720771] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.395 [2024-04-26 15:36:40.720777] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.395 [2024-04-26 15:36:40.720790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.395 qpair failed and we were unable to recover it. 
00:26:23.395 [2024-04-26 15:36:40.730678] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.395 [2024-04-26 15:36:40.730762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.395 [2024-04-26 15:36:40.730776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.395 [2024-04-26 15:36:40.730783] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.395 [2024-04-26 15:36:40.730789] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.395 [2024-04-26 15:36:40.730802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.395 qpair failed and we were unable to recover it. 
00:26:23.395 [2024-04-26 15:36:40.740753] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.395 [2024-04-26 15:36:40.740812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.395 [2024-04-26 15:36:40.740826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.395 [2024-04-26 15:36:40.740832] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.395 [2024-04-26 15:36:40.740844] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.395 [2024-04-26 15:36:40.740857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.395 qpair failed and we were unable to recover it. 
00:26:23.395 [2024-04-26 15:36:40.750604] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.395 [2024-04-26 15:36:40.750656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.395 [2024-04-26 15:36:40.750670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.395 [2024-04-26 15:36:40.750677] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.395 [2024-04-26 15:36:40.750683] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.395 [2024-04-26 15:36:40.750703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.395 qpair failed and we were unable to recover it. 
00:26:23.395 [2024-04-26 15:36:40.760782] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.395 [2024-04-26 15:36:40.760835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.395 [2024-04-26 15:36:40.760853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.395 [2024-04-26 15:36:40.760860] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.395 [2024-04-26 15:36:40.760866] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.395 [2024-04-26 15:36:40.760880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.395 qpair failed and we were unable to recover it. 
00:26:23.395 [2024-04-26 15:36:40.770772] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.395 [2024-04-26 15:36:40.770831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.395 [2024-04-26 15:36:40.770850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.395 [2024-04-26 15:36:40.770860] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.395 [2024-04-26 15:36:40.770866] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.395 [2024-04-26 15:36:40.770881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.395 qpair failed and we were unable to recover it. 
00:26:23.395 [2024-04-26 15:36:40.780679] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.395 [2024-04-26 15:36:40.780732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.395 [2024-04-26 15:36:40.780746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.395 [2024-04-26 15:36:40.780753] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.395 [2024-04-26 15:36:40.780759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.395 [2024-04-26 15:36:40.780772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.395 qpair failed and we were unable to recover it. 
00:26:23.395 [2024-04-26 15:36:40.790824] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.395 [2024-04-26 15:36:40.790895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.395 [2024-04-26 15:36:40.790910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.395 [2024-04-26 15:36:40.790917] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.395 [2024-04-26 15:36:40.790924] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.395 [2024-04-26 15:36:40.790938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.395 qpair failed and we were unable to recover it. 
00:26:23.395 [2024-04-26 15:36:40.800884] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.395 [2024-04-26 15:36:40.800936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.395 [2024-04-26 15:36:40.800949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.395 [2024-04-26 15:36:40.800956] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.395 [2024-04-26 15:36:40.800962] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.395 [2024-04-26 15:36:40.800976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.395 qpair failed and we were unable to recover it. 
00:26:23.395 [2024-04-26 15:36:40.810886] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.395 [2024-04-26 15:36:40.810933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.395 [2024-04-26 15:36:40.810947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.395 [2024-04-26 15:36:40.810953] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.395 [2024-04-26 15:36:40.810959] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.395 [2024-04-26 15:36:40.810973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.395 qpair failed and we were unable to recover it. 
00:26:23.395 [2024-04-26 15:36:40.820975] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.395 [2024-04-26 15:36:40.821028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.395 [2024-04-26 15:36:40.821041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.395 [2024-04-26 15:36:40.821048] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.395 [2024-04-26 15:36:40.821055] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.395 [2024-04-26 15:36:40.821068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.395 qpair failed and we were unable to recover it. 
00:26:23.395 [2024-04-26 15:36:40.830932] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.395 [2024-04-26 15:36:40.830987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.395 [2024-04-26 15:36:40.831001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.395 [2024-04-26 15:36:40.831007] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.395 [2024-04-26 15:36:40.831014] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.395 [2024-04-26 15:36:40.831027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.395 qpair failed and we were unable to recover it. 
00:26:23.658 [2024-04-26 15:36:40.840984] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.658 [2024-04-26 15:36:40.841036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.658 [2024-04-26 15:36:40.841050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.658 [2024-04-26 15:36:40.841057] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.658 [2024-04-26 15:36:40.841063] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.658 [2024-04-26 15:36:40.841077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.658 qpair failed and we were unable to recover it. 
00:26:23.658 [2024-04-26 15:36:40.850977] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.658 [2024-04-26 15:36:40.851026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.658 [2024-04-26 15:36:40.851039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.658 [2024-04-26 15:36:40.851046] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.658 [2024-04-26 15:36:40.851052] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.658 [2024-04-26 15:36:40.851065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.658 qpair failed and we were unable to recover it. 
00:26:23.658 [2024-04-26 15:36:40.861027] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.658 [2024-04-26 15:36:40.861080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.658 [2024-04-26 15:36:40.861096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.658 [2024-04-26 15:36:40.861103] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.658 [2024-04-26 15:36:40.861109] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.658 [2024-04-26 15:36:40.861123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.658 qpair failed and we were unable to recover it. 
00:26:23.658 [2024-04-26 15:36:40.871028] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.658 [2024-04-26 15:36:40.871080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.658 [2024-04-26 15:36:40.871093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.658 [2024-04-26 15:36:40.871099] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.658 [2024-04-26 15:36:40.871105] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.658 [2024-04-26 15:36:40.871119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.658 qpair failed and we were unable to recover it. 
00:26:23.658 [2024-04-26 15:36:40.881116] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.658 [2024-04-26 15:36:40.881189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.658 [2024-04-26 15:36:40.881202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.658 [2024-04-26 15:36:40.881209] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.658 [2024-04-26 15:36:40.881215] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.658 [2024-04-26 15:36:40.881228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.658 qpair failed and we were unable to recover it. 
00:26:23.658 [2024-04-26 15:36:40.891129] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.658 [2024-04-26 15:36:40.891222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.658 [2024-04-26 15:36:40.891235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.658 [2024-04-26 15:36:40.891242] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.658 [2024-04-26 15:36:40.891248] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.658 [2024-04-26 15:36:40.891262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.658 qpair failed and we were unable to recover it. 
00:26:23.658 [2024-04-26 15:36:40.901115] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.659 [2024-04-26 15:36:40.901172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.659 [2024-04-26 15:36:40.901186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.659 [2024-04-26 15:36:40.901193] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.659 [2024-04-26 15:36:40.901200] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.659 [2024-04-26 15:36:40.901217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.659 qpair failed and we were unable to recover it. 
00:26:23.659 [2024-04-26 15:36:40.911193] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.659 [2024-04-26 15:36:40.911273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.659 [2024-04-26 15:36:40.911290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.659 [2024-04-26 15:36:40.911297] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.659 [2024-04-26 15:36:40.911303] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.659 [2024-04-26 15:36:40.911318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.659 qpair failed and we were unable to recover it. 
00:26:23.659 [2024-04-26 15:36:40.921225] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.659 [2024-04-26 15:36:40.921277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.659 [2024-04-26 15:36:40.921291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.659 [2024-04-26 15:36:40.921298] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.659 [2024-04-26 15:36:40.921304] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.659 [2024-04-26 15:36:40.921318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.659 qpair failed and we were unable to recover it. 
00:26:23.659 [2024-04-26 15:36:40.931217] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.659 [2024-04-26 15:36:40.931262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.659 [2024-04-26 15:36:40.931276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.659 [2024-04-26 15:36:40.931283] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.659 [2024-04-26 15:36:40.931288] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.659 [2024-04-26 15:36:40.931302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.659 qpair failed and we were unable to recover it. 
00:26:23.659 [2024-04-26 15:36:40.941239] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.659 [2024-04-26 15:36:40.941292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.659 [2024-04-26 15:36:40.941305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.659 [2024-04-26 15:36:40.941312] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.659 [2024-04-26 15:36:40.941318] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.659 [2024-04-26 15:36:40.941331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.659 qpair failed and we were unable to recover it. 
00:26:23.659 [2024-04-26 15:36:40.951260] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.659 [2024-04-26 15:36:40.951312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.659 [2024-04-26 15:36:40.951329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.659 [2024-04-26 15:36:40.951336] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.659 [2024-04-26 15:36:40.951342] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.659 [2024-04-26 15:36:40.951355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.659 qpair failed and we were unable to recover it. 
00:26:23.659 [2024-04-26 15:36:40.961300] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.659 [2024-04-26 15:36:40.961352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.659 [2024-04-26 15:36:40.961366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.659 [2024-04-26 15:36:40.961373] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.659 [2024-04-26 15:36:40.961379] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.659 [2024-04-26 15:36:40.961392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.659 qpair failed and we were unable to recover it. 
00:26:23.659 [2024-04-26 15:36:40.971239] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.659 [2024-04-26 15:36:40.971292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.659 [2024-04-26 15:36:40.971306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.659 [2024-04-26 15:36:40.971312] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.659 [2024-04-26 15:36:40.971318] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.659 [2024-04-26 15:36:40.971332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.659 qpair failed and we were unable to recover it. 
00:26:23.659 [2024-04-26 15:36:40.981347] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.659 [2024-04-26 15:36:40.981396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.659 [2024-04-26 15:36:40.981409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.659 [2024-04-26 15:36:40.981416] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.659 [2024-04-26 15:36:40.981422] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.659 [2024-04-26 15:36:40.981435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.659 qpair failed and we were unable to recover it. 
00:26:23.659 [2024-04-26 15:36:40.991293] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.659 [2024-04-26 15:36:40.991348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.659 [2024-04-26 15:36:40.991362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.659 [2024-04-26 15:36:40.991369] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.659 [2024-04-26 15:36:40.991375] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.659 [2024-04-26 15:36:40.991395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.659 qpair failed and we were unable to recover it. 
00:26:23.659 [2024-04-26 15:36:41.001432] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.659 [2024-04-26 15:36:41.001489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.659 [2024-04-26 15:36:41.001503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.659 [2024-04-26 15:36:41.001510] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.659 [2024-04-26 15:36:41.001516] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.659 [2024-04-26 15:36:41.001530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.659 qpair failed and we were unable to recover it. 
00:26:23.659 [2024-04-26 15:36:41.011413] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.659 [2024-04-26 15:36:41.011465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.659 [2024-04-26 15:36:41.011480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.659 [2024-04-26 15:36:41.011487] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.659 [2024-04-26 15:36:41.011493] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.659 [2024-04-26 15:36:41.011506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.659 qpair failed and we were unable to recover it. 
00:26:23.659 [2024-04-26 15:36:41.021448] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.659 [2024-04-26 15:36:41.021500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.659 [2024-04-26 15:36:41.021513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.659 [2024-04-26 15:36:41.021520] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.659 [2024-04-26 15:36:41.021526] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.659 [2024-04-26 15:36:41.021539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.659 qpair failed and we were unable to recover it. 
00:26:23.659 [2024-04-26 15:36:41.031481] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.659 [2024-04-26 15:36:41.031532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.659 [2024-04-26 15:36:41.031546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.660 [2024-04-26 15:36:41.031552] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.660 [2024-04-26 15:36:41.031558] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.660 [2024-04-26 15:36:41.031572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.660 qpair failed and we were unable to recover it. 
00:26:23.660 [2024-04-26 15:36:41.041369] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.660 [2024-04-26 15:36:41.041424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.660 [2024-04-26 15:36:41.041438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.660 [2024-04-26 15:36:41.041445] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.660 [2024-04-26 15:36:41.041451] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.660 [2024-04-26 15:36:41.041465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.660 qpair failed and we were unable to recover it. 
00:26:23.660 [2024-04-26 15:36:41.041562] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b1c60 is same with the state(5) to be set 00:26:23.660 [2024-04-26 15:36:41.051539] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.660 [2024-04-26 15:36:41.051588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.660 [2024-04-26 15:36:41.051602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.660 [2024-04-26 15:36:41.051609] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.660 [2024-04-26 15:36:41.051615] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.660 [2024-04-26 15:36:41.051628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.660 qpair failed and we were unable to recover it. 
00:26:23.660 [2024-04-26 15:36:41.061460] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.660 [2024-04-26 15:36:41.061515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.660 [2024-04-26 15:36:41.061530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.660 [2024-04-26 15:36:41.061537] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.660 [2024-04-26 15:36:41.061546] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.660 [2024-04-26 15:36:41.061561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.660 qpair failed and we were unable to recover it. 
00:26:23.660 [2024-04-26 15:36:41.071588] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.660 [2024-04-26 15:36:41.071646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.660 [2024-04-26 15:36:41.071660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.660 [2024-04-26 15:36:41.071667] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.660 [2024-04-26 15:36:41.071673] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.660 [2024-04-26 15:36:41.071687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.660 qpair failed and we were unable to recover it. 
00:26:23.660 [2024-04-26 15:36:41.081617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.660 [2024-04-26 15:36:41.081668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.660 [2024-04-26 15:36:41.081682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.660 [2024-04-26 15:36:41.081692] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.660 [2024-04-26 15:36:41.081698] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.660 [2024-04-26 15:36:41.081712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.660 qpair failed and we were unable to recover it. 
00:26:23.660 [2024-04-26 15:36:41.091595] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.660 [2024-04-26 15:36:41.091685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.660 [2024-04-26 15:36:41.091699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.660 [2024-04-26 15:36:41.091706] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.660 [2024-04-26 15:36:41.091712] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.660 [2024-04-26 15:36:41.091726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.660 qpair failed and we were unable to recover it. 
00:26:23.660 [2024-04-26 15:36:41.101544] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.660 [2024-04-26 15:36:41.101595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.660 [2024-04-26 15:36:41.101609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.660 [2024-04-26 15:36:41.101616] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.660 [2024-04-26 15:36:41.101622] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.660 [2024-04-26 15:36:41.101636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.660 qpair failed and we were unable to recover it. 
00:26:23.928 [2024-04-26 15:36:41.111689] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.929 [2024-04-26 15:36:41.111743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.929 [2024-04-26 15:36:41.111757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.929 [2024-04-26 15:36:41.111764] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.929 [2024-04-26 15:36:41.111770] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.929 [2024-04-26 15:36:41.111784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.929 qpair failed and we were unable to recover it. 
00:26:23.929 [2024-04-26 15:36:41.121762] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.929 [2024-04-26 15:36:41.121810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.929 [2024-04-26 15:36:41.121823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.929 [2024-04-26 15:36:41.121830] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.929 [2024-04-26 15:36:41.121840] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.929 [2024-04-26 15:36:41.121855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.929 qpair failed and we were unable to recover it. 
00:26:23.929 [2024-04-26 15:36:41.131739] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.929 [2024-04-26 15:36:41.131823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.929 [2024-04-26 15:36:41.131843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.929 [2024-04-26 15:36:41.131850] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.929 [2024-04-26 15:36:41.131856] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:23.929 [2024-04-26 15:36:41.131871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.929 qpair failed and we were unable to recover it. 
00:26:23.929 [2024-04-26 15:36:41.141778] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.929 [2024-04-26 15:36:41.141828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.929 [2024-04-26 15:36:41.141847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.929 [2024-04-26 15:36:41.141853] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.929 [2024-04-26 15:36:41.141860] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:23.929 [2024-04-26 15:36:41.141873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:23.929 qpair failed and we were unable to recover it.
00:26:23.929 [2024-04-26 15:36:41.151675] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.929 [2024-04-26 15:36:41.151731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.929 [2024-04-26 15:36:41.151744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.929 [2024-04-26 15:36:41.151751] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.929 [2024-04-26 15:36:41.151757] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:23.929 [2024-04-26 15:36:41.151770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:23.929 qpair failed and we were unable to recover it.
00:26:23.929 [2024-04-26 15:36:41.161820] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.929 [2024-04-26 15:36:41.161881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.929 [2024-04-26 15:36:41.161900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.929 [2024-04-26 15:36:41.161908] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.929 [2024-04-26 15:36:41.161914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:23.929 [2024-04-26 15:36:41.161931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:23.929 qpair failed and we were unable to recover it.
00:26:23.929 [2024-04-26 15:36:41.171843] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.929 [2024-04-26 15:36:41.171893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.929 [2024-04-26 15:36:41.171910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.929 [2024-04-26 15:36:41.171918] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.929 [2024-04-26 15:36:41.171924] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:23.929 [2024-04-26 15:36:41.171938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:23.929 qpair failed and we were unable to recover it.
00:26:23.929 [2024-04-26 15:36:41.181860] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.929 [2024-04-26 15:36:41.181918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.929 [2024-04-26 15:36:41.181932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.929 [2024-04-26 15:36:41.181938] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.929 [2024-04-26 15:36:41.181944] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:23.929 [2024-04-26 15:36:41.181959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:23.929 qpair failed and we were unable to recover it.
00:26:23.929 [2024-04-26 15:36:41.191896] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.929 [2024-04-26 15:36:41.191955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.929 [2024-04-26 15:36:41.191969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.929 [2024-04-26 15:36:41.191976] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.929 [2024-04-26 15:36:41.191982] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:23.929 [2024-04-26 15:36:41.191995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:23.929 qpair failed and we were unable to recover it.
00:26:23.929 [2024-04-26 15:36:41.201903] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.929 [2024-04-26 15:36:41.201959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.929 [2024-04-26 15:36:41.201972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.929 [2024-04-26 15:36:41.201979] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.929 [2024-04-26 15:36:41.201985] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:23.929 [2024-04-26 15:36:41.201999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:23.929 qpair failed and we were unable to recover it.
00:26:23.929 [2024-04-26 15:36:41.211821] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.929 [2024-04-26 15:36:41.211874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.929 [2024-04-26 15:36:41.211888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.929 [2024-04-26 15:36:41.211895] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.929 [2024-04-26 15:36:41.211901] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:23.929 [2024-04-26 15:36:41.211919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:23.929 qpair failed and we were unable to recover it.
00:26:23.929 [2024-04-26 15:36:41.221973] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.929 [2024-04-26 15:36:41.222026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.929 [2024-04-26 15:36:41.222041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.929 [2024-04-26 15:36:41.222047] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.929 [2024-04-26 15:36:41.222053] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:23.929 [2024-04-26 15:36:41.222067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:23.929 qpair failed and we were unable to recover it.
00:26:23.929 [2024-04-26 15:36:41.232001] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.929 [2024-04-26 15:36:41.232057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.929 [2024-04-26 15:36:41.232070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.929 [2024-04-26 15:36:41.232077] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.929 [2024-04-26 15:36:41.232083] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:23.929 [2024-04-26 15:36:41.232097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:23.930 qpair failed and we were unable to recover it.
00:26:23.930 [2024-04-26 15:36:41.241991] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.930 [2024-04-26 15:36:41.242039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.930 [2024-04-26 15:36:41.242054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.930 [2024-04-26 15:36:41.242060] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.930 [2024-04-26 15:36:41.242067] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:23.930 [2024-04-26 15:36:41.242080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:23.930 qpair failed and we were unable to recover it.
00:26:23.930 [2024-04-26 15:36:41.252057] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.930 [2024-04-26 15:36:41.252110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.930 [2024-04-26 15:36:41.252124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.930 [2024-04-26 15:36:41.252130] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.930 [2024-04-26 15:36:41.252136] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:23.930 [2024-04-26 15:36:41.252150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:23.930 qpair failed and we were unable to recover it.
00:26:23.930 [2024-04-26 15:36:41.262066] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.930 [2024-04-26 15:36:41.262157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.930 [2024-04-26 15:36:41.262174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.930 [2024-04-26 15:36:41.262181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.930 [2024-04-26 15:36:41.262187] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:23.930 [2024-04-26 15:36:41.262200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:23.930 qpair failed and we were unable to recover it.
00:26:23.930 [2024-04-26 15:36:41.272116] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.930 [2024-04-26 15:36:41.272171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.930 [2024-04-26 15:36:41.272185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.930 [2024-04-26 15:36:41.272192] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.930 [2024-04-26 15:36:41.272198] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:23.930 [2024-04-26 15:36:41.272212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:23.930 qpair failed and we were unable to recover it.
00:26:23.930 [2024-04-26 15:36:41.282143] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.930 [2024-04-26 15:36:41.282191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.930 [2024-04-26 15:36:41.282205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.930 [2024-04-26 15:36:41.282212] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.930 [2024-04-26 15:36:41.282218] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:23.930 [2024-04-26 15:36:41.282232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:23.930 qpair failed and we were unable to recover it.
00:26:23.930 [2024-04-26 15:36:41.292185] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.930 [2024-04-26 15:36:41.292233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.930 [2024-04-26 15:36:41.292247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.930 [2024-04-26 15:36:41.292253] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.930 [2024-04-26 15:36:41.292259] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:23.930 [2024-04-26 15:36:41.292273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:23.930 qpair failed and we were unable to recover it.
00:26:23.930 [2024-04-26 15:36:41.302262] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.930 [2024-04-26 15:36:41.302314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.930 [2024-04-26 15:36:41.302328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.930 [2024-04-26 15:36:41.302335] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.930 [2024-04-26 15:36:41.302344] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:23.930 [2024-04-26 15:36:41.302358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:23.930 qpair failed and we were unable to recover it.
00:26:23.930 [2024-04-26 15:36:41.312204] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.930 [2024-04-26 15:36:41.312267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.930 [2024-04-26 15:36:41.312281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.930 [2024-04-26 15:36:41.312288] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.930 [2024-04-26 15:36:41.312294] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:23.930 [2024-04-26 15:36:41.312307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:23.930 qpair failed and we were unable to recover it.
00:26:23.930 [2024-04-26 15:36:41.322259] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.930 [2024-04-26 15:36:41.322310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.930 [2024-04-26 15:36:41.322324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.930 [2024-04-26 15:36:41.322331] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.930 [2024-04-26 15:36:41.322337] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:23.930 [2024-04-26 15:36:41.322350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:23.930 qpair failed and we were unable to recover it.
00:26:23.930 [2024-04-26 15:36:41.332146] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.930 [2024-04-26 15:36:41.332196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.930 [2024-04-26 15:36:41.332210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.930 [2024-04-26 15:36:41.332216] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.930 [2024-04-26 15:36:41.332222] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:23.930 [2024-04-26 15:36:41.332235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:23.930 qpair failed and we were unable to recover it.
00:26:23.930 [2024-04-26 15:36:41.342311] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.930 [2024-04-26 15:36:41.342392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.930 [2024-04-26 15:36:41.342406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.930 [2024-04-26 15:36:41.342412] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.930 [2024-04-26 15:36:41.342418] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:23.930 [2024-04-26 15:36:41.342432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:23.930 qpair failed and we were unable to recover it.
00:26:23.930 [2024-04-26 15:36:41.352354] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.930 [2024-04-26 15:36:41.352409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.930 [2024-04-26 15:36:41.352423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.930 [2024-04-26 15:36:41.352429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.930 [2024-04-26 15:36:41.352436] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:23.930 [2024-04-26 15:36:41.352449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:23.930 qpair failed and we were unable to recover it.
00:26:23.930 [2024-04-26 15:36:41.362326] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.930 [2024-04-26 15:36:41.362376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.930 [2024-04-26 15:36:41.362390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.930 [2024-04-26 15:36:41.362396] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.931 [2024-04-26 15:36:41.362402] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:23.931 [2024-04-26 15:36:41.362415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:23.931 qpair failed and we were unable to recover it.
00:26:23.931 [2024-04-26 15:36:41.372364] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.931 [2024-04-26 15:36:41.372410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.931 [2024-04-26 15:36:41.372424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.931 [2024-04-26 15:36:41.372431] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.931 [2024-04-26 15:36:41.372437] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:23.931 [2024-04-26 15:36:41.372450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:23.931 qpair failed and we were unable to recover it.
00:26:24.196 [2024-04-26 15:36:41.382429] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.197 [2024-04-26 15:36:41.382482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.197 [2024-04-26 15:36:41.382496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.197 [2024-04-26 15:36:41.382503] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.197 [2024-04-26 15:36:41.382509] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.197 [2024-04-26 15:36:41.382522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.197 qpair failed and we were unable to recover it.
00:26:24.197 [2024-04-26 15:36:41.392435] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.197 [2024-04-26 15:36:41.392491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.197 [2024-04-26 15:36:41.392504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.197 [2024-04-26 15:36:41.392511] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.197 [2024-04-26 15:36:41.392520] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.197 [2024-04-26 15:36:41.392534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.197 qpair failed and we were unable to recover it.
00:26:24.197 [2024-04-26 15:36:41.402435] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.197 [2024-04-26 15:36:41.402483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.197 [2024-04-26 15:36:41.402496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.197 [2024-04-26 15:36:41.402503] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.197 [2024-04-26 15:36:41.402509] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.197 [2024-04-26 15:36:41.402523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.197 qpair failed and we were unable to recover it.
00:26:24.197 [2024-04-26 15:36:41.412510] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.197 [2024-04-26 15:36:41.412561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.197 [2024-04-26 15:36:41.412578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.197 [2024-04-26 15:36:41.412585] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.197 [2024-04-26 15:36:41.412591] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.197 [2024-04-26 15:36:41.412607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.197 qpair failed and we were unable to recover it.
00:26:24.197 [2024-04-26 15:36:41.422511] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.197 [2024-04-26 15:36:41.422562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.197 [2024-04-26 15:36:41.422576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.197 [2024-04-26 15:36:41.422582] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.197 [2024-04-26 15:36:41.422589] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.197 [2024-04-26 15:36:41.422602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.197 qpair failed and we were unable to recover it.
00:26:24.197 [2024-04-26 15:36:41.432464] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.197 [2024-04-26 15:36:41.432525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.197 [2024-04-26 15:36:41.432539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.197 [2024-04-26 15:36:41.432546] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.197 [2024-04-26 15:36:41.432552] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.197 [2024-04-26 15:36:41.432565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.197 qpair failed and we were unable to recover it.
00:26:24.197 [2024-04-26 15:36:41.442492] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.197 [2024-04-26 15:36:41.442538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.197 [2024-04-26 15:36:41.442551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.197 [2024-04-26 15:36:41.442558] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.197 [2024-04-26 15:36:41.442564] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.197 [2024-04-26 15:36:41.442578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.197 qpair failed and we were unable to recover it.
00:26:24.197 [2024-04-26 15:36:41.452561] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.197 [2024-04-26 15:36:41.452613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.197 [2024-04-26 15:36:41.452636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.197 [2024-04-26 15:36:41.452645] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.197 [2024-04-26 15:36:41.452652] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.197 [2024-04-26 15:36:41.452669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.197 qpair failed and we were unable to recover it.
00:26:24.197 [2024-04-26 15:36:41.462638] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.197 [2024-04-26 15:36:41.462694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.197 [2024-04-26 15:36:41.462718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.197 [2024-04-26 15:36:41.462726] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.197 [2024-04-26 15:36:41.462733] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.197 [2024-04-26 15:36:41.462751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.197 qpair failed and we were unable to recover it.
00:26:24.197 [2024-04-26 15:36:41.472657] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.197 [2024-04-26 15:36:41.472714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.197 [2024-04-26 15:36:41.472729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.197 [2024-04-26 15:36:41.472737] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.197 [2024-04-26 15:36:41.472744] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.197 [2024-04-26 15:36:41.472758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.197 qpair failed and we were unable to recover it.
00:26:24.197 [2024-04-26 15:36:41.482700] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.197 [2024-04-26 15:36:41.482752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.197 [2024-04-26 15:36:41.482767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.197 [2024-04-26 15:36:41.482778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.197 [2024-04-26 15:36:41.482784] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.197 [2024-04-26 15:36:41.482798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.197 qpair failed and we were unable to recover it.
00:26:24.197 [2024-04-26 15:36:41.492727] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.197 [2024-04-26 15:36:41.492775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.197 [2024-04-26 15:36:41.492789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.197 [2024-04-26 15:36:41.492796] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.197 [2024-04-26 15:36:41.492802] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.197 [2024-04-26 15:36:41.492815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.197 qpair failed and we were unable to recover it.
00:26:24.197 [2024-04-26 15:36:41.502625] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.197 [2024-04-26 15:36:41.502674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.197 [2024-04-26 15:36:41.502688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.197 [2024-04-26 15:36:41.502695] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.197 [2024-04-26 15:36:41.502701] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.197 [2024-04-26 15:36:41.502714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.197 qpair failed and we were unable to recover it. 
00:26:24.197 [2024-04-26 15:36:41.512759] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.197 [2024-04-26 15:36:41.512814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.197 [2024-04-26 15:36:41.512828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.198 [2024-04-26 15:36:41.512835] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.198 [2024-04-26 15:36:41.512848] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.198 [2024-04-26 15:36:41.512863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.198 qpair failed and we were unable to recover it. 
00:26:24.198 [2024-04-26 15:36:41.522805] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.198 [2024-04-26 15:36:41.522860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.198 [2024-04-26 15:36:41.522874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.198 [2024-04-26 15:36:41.522881] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.198 [2024-04-26 15:36:41.522887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.198 [2024-04-26 15:36:41.522901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.198 qpair failed and we were unable to recover it. 
00:26:24.198 [2024-04-26 15:36:41.532792] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.198 [2024-04-26 15:36:41.532845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.198 [2024-04-26 15:36:41.532859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.198 [2024-04-26 15:36:41.532866] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.198 [2024-04-26 15:36:41.532872] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.198 [2024-04-26 15:36:41.532885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.198 qpair failed and we were unable to recover it. 
00:26:24.198 [2024-04-26 15:36:41.542800] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.198 [2024-04-26 15:36:41.542855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.198 [2024-04-26 15:36:41.542869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.198 [2024-04-26 15:36:41.542876] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.198 [2024-04-26 15:36:41.542882] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.198 [2024-04-26 15:36:41.542895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.198 qpair failed and we were unable to recover it. 
00:26:24.198 [2024-04-26 15:36:41.552888] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.198 [2024-04-26 15:36:41.552940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.198 [2024-04-26 15:36:41.552954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.198 [2024-04-26 15:36:41.552961] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.198 [2024-04-26 15:36:41.552967] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.198 [2024-04-26 15:36:41.552980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.198 qpair failed and we were unable to recover it. 
00:26:24.198 [2024-04-26 15:36:41.562872] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.198 [2024-04-26 15:36:41.562918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.198 [2024-04-26 15:36:41.562931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.198 [2024-04-26 15:36:41.562938] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.198 [2024-04-26 15:36:41.562944] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.198 [2024-04-26 15:36:41.562958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.198 qpair failed and we were unable to recover it. 
00:26:24.198 [2024-04-26 15:36:41.572870] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.198 [2024-04-26 15:36:41.572931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.198 [2024-04-26 15:36:41.572948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.198 [2024-04-26 15:36:41.572955] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.198 [2024-04-26 15:36:41.572961] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.198 [2024-04-26 15:36:41.572975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.198 qpair failed and we were unable to recover it. 
00:26:24.198 [2024-04-26 15:36:41.582967] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.198 [2024-04-26 15:36:41.583014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.198 [2024-04-26 15:36:41.583027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.198 [2024-04-26 15:36:41.583034] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.198 [2024-04-26 15:36:41.583040] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.198 [2024-04-26 15:36:41.583054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.198 qpair failed and we were unable to recover it. 
00:26:24.198 [2024-04-26 15:36:41.592996] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.198 [2024-04-26 15:36:41.593086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.198 [2024-04-26 15:36:41.593101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.198 [2024-04-26 15:36:41.593107] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.198 [2024-04-26 15:36:41.593113] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.198 [2024-04-26 15:36:41.593127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.198 qpair failed and we were unable to recover it. 
00:26:24.198 [2024-04-26 15:36:41.602996] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.198 [2024-04-26 15:36:41.603045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.198 [2024-04-26 15:36:41.603059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.198 [2024-04-26 15:36:41.603066] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.198 [2024-04-26 15:36:41.603072] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.198 [2024-04-26 15:36:41.603085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.198 qpair failed and we were unable to recover it. 
00:26:24.198 [2024-04-26 15:36:41.613040] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.198 [2024-04-26 15:36:41.613085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.198 [2024-04-26 15:36:41.613099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.198 [2024-04-26 15:36:41.613106] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.198 [2024-04-26 15:36:41.613112] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.198 [2024-04-26 15:36:41.613129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.198 qpair failed and we were unable to recover it. 
00:26:24.198 [2024-04-26 15:36:41.623040] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.198 [2024-04-26 15:36:41.623090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.198 [2024-04-26 15:36:41.623104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.198 [2024-04-26 15:36:41.623110] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.198 [2024-04-26 15:36:41.623116] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.198 [2024-04-26 15:36:41.623130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.198 qpair failed and we were unable to recover it. 
00:26:24.198 [2024-04-26 15:36:41.633114] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.198 [2024-04-26 15:36:41.633172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.199 [2024-04-26 15:36:41.633185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.199 [2024-04-26 15:36:41.633192] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.199 [2024-04-26 15:36:41.633198] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.199 [2024-04-26 15:36:41.633211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.199 qpair failed and we were unable to recover it. 
00:26:24.199 [2024-04-26 15:36:41.643139] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.199 [2024-04-26 15:36:41.643198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.199 [2024-04-26 15:36:41.643212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.199 [2024-04-26 15:36:41.643219] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.199 [2024-04-26 15:36:41.643224] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.199 [2024-04-26 15:36:41.643238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.199 qpair failed and we were unable to recover it. 
00:26:24.461 [2024-04-26 15:36:41.653069] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.462 [2024-04-26 15:36:41.653122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.462 [2024-04-26 15:36:41.653136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.462 [2024-04-26 15:36:41.653142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.462 [2024-04-26 15:36:41.653148] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.462 [2024-04-26 15:36:41.653162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.462 qpair failed and we were unable to recover it. 
00:26:24.462 [2024-04-26 15:36:41.663211] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.462 [2024-04-26 15:36:41.663262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.462 [2024-04-26 15:36:41.663284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.462 [2024-04-26 15:36:41.663291] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.462 [2024-04-26 15:36:41.663297] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.462 [2024-04-26 15:36:41.663313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.462 qpair failed and we were unable to recover it. 
00:26:24.462 [2024-04-26 15:36:41.673196] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.462 [2024-04-26 15:36:41.673266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.462 [2024-04-26 15:36:41.673280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.462 [2024-04-26 15:36:41.673287] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.462 [2024-04-26 15:36:41.673292] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.462 [2024-04-26 15:36:41.673306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.462 qpair failed and we were unable to recover it. 
00:26:24.462 [2024-04-26 15:36:41.683221] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.462 [2024-04-26 15:36:41.683275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.462 [2024-04-26 15:36:41.683288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.462 [2024-04-26 15:36:41.683295] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.462 [2024-04-26 15:36:41.683301] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.462 [2024-04-26 15:36:41.683315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.462 qpair failed and we were unable to recover it. 
00:26:24.462 [2024-04-26 15:36:41.693284] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.462 [2024-04-26 15:36:41.693334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.462 [2024-04-26 15:36:41.693348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.462 [2024-04-26 15:36:41.693355] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.462 [2024-04-26 15:36:41.693361] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.462 [2024-04-26 15:36:41.693374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.462 qpair failed and we were unable to recover it. 
00:26:24.462 [2024-04-26 15:36:41.703322] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.462 [2024-04-26 15:36:41.703380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.462 [2024-04-26 15:36:41.703393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.462 [2024-04-26 15:36:41.703399] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.462 [2024-04-26 15:36:41.703406] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.462 [2024-04-26 15:36:41.703423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.462 qpair failed and we were unable to recover it. 
00:26:24.462 [2024-04-26 15:36:41.713328] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.462 [2024-04-26 15:36:41.713385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.462 [2024-04-26 15:36:41.713399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.462 [2024-04-26 15:36:41.713406] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.462 [2024-04-26 15:36:41.713412] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.462 [2024-04-26 15:36:41.713426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.462 qpair failed and we were unable to recover it. 
00:26:24.462 [2024-04-26 15:36:41.723328] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.462 [2024-04-26 15:36:41.723380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.462 [2024-04-26 15:36:41.723393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.462 [2024-04-26 15:36:41.723400] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.462 [2024-04-26 15:36:41.723406] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.462 [2024-04-26 15:36:41.723419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.462 qpair failed and we were unable to recover it. 
00:26:24.462 [2024-04-26 15:36:41.733369] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.462 [2024-04-26 15:36:41.733429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.462 [2024-04-26 15:36:41.733443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.462 [2024-04-26 15:36:41.733450] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.462 [2024-04-26 15:36:41.733456] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.462 [2024-04-26 15:36:41.733469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.462 qpair failed and we were unable to recover it. 
00:26:24.462 [2024-04-26 15:36:41.743416] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.462 [2024-04-26 15:36:41.743476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.462 [2024-04-26 15:36:41.743490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.462 [2024-04-26 15:36:41.743497] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.462 [2024-04-26 15:36:41.743503] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.462 [2024-04-26 15:36:41.743516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.462 qpair failed and we were unable to recover it. 
00:26:24.462 [2024-04-26 15:36:41.753430] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.462 [2024-04-26 15:36:41.753491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.462 [2024-04-26 15:36:41.753505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.462 [2024-04-26 15:36:41.753512] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.462 [2024-04-26 15:36:41.753518] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.462 [2024-04-26 15:36:41.753531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.462 qpair failed and we were unable to recover it. 
00:26:24.462 [2024-04-26 15:36:41.763430] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.462 [2024-04-26 15:36:41.763479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.462 [2024-04-26 15:36:41.763492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.462 [2024-04-26 15:36:41.763499] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.462 [2024-04-26 15:36:41.763505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.462 [2024-04-26 15:36:41.763518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.462 qpair failed and we were unable to recover it. 
00:26:24.462 [2024-04-26 15:36:41.773498] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.462 [2024-04-26 15:36:41.773550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.462 [2024-04-26 15:36:41.773574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.462 [2024-04-26 15:36:41.773582] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.462 [2024-04-26 15:36:41.773589] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.462 [2024-04-26 15:36:41.773607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.462 qpair failed and we were unable to recover it. 
00:26:24.462 [2024-04-26 15:36:41.783524] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.462 [2024-04-26 15:36:41.783579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.463 [2024-04-26 15:36:41.783603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.463 [2024-04-26 15:36:41.783611] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.463 [2024-04-26 15:36:41.783618] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.463 [2024-04-26 15:36:41.783636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.463 qpair failed and we were unable to recover it.
00:26:24.463 [2024-04-26 15:36:41.793562] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.463 [2024-04-26 15:36:41.793618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.463 [2024-04-26 15:36:41.793641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.463 [2024-04-26 15:36:41.793650] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.463 [2024-04-26 15:36:41.793664] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.463 [2024-04-26 15:36:41.793683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.463 qpair failed and we were unable to recover it.
00:26:24.463 [2024-04-26 15:36:41.803572] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.463 [2024-04-26 15:36:41.803630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.463 [2024-04-26 15:36:41.803653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.463 [2024-04-26 15:36:41.803662] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.463 [2024-04-26 15:36:41.803668] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.463 [2024-04-26 15:36:41.803686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.463 qpair failed and we were unable to recover it.
00:26:24.463 [2024-04-26 15:36:41.813482] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.463 [2024-04-26 15:36:41.813534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.463 [2024-04-26 15:36:41.813551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.463 [2024-04-26 15:36:41.813558] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.463 [2024-04-26 15:36:41.813564] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.463 [2024-04-26 15:36:41.813578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.463 qpair failed and we were unable to recover it.
00:26:24.463 [2024-04-26 15:36:41.823646] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.463 [2024-04-26 15:36:41.823695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.463 [2024-04-26 15:36:41.823709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.463 [2024-04-26 15:36:41.823715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.463 [2024-04-26 15:36:41.823721] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.463 [2024-04-26 15:36:41.823735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.463 qpair failed and we were unable to recover it.
00:26:24.463 [2024-04-26 15:36:41.833673] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.463 [2024-04-26 15:36:41.833731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.463 [2024-04-26 15:36:41.833745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.463 [2024-04-26 15:36:41.833752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.463 [2024-04-26 15:36:41.833758] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.463 [2024-04-26 15:36:41.833771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.463 qpair failed and we were unable to recover it.
00:26:24.463 [2024-04-26 15:36:41.843558] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.463 [2024-04-26 15:36:41.843621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.463 [2024-04-26 15:36:41.843635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.463 [2024-04-26 15:36:41.843642] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.463 [2024-04-26 15:36:41.843648] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.463 [2024-04-26 15:36:41.843661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.463 qpair failed and we were unable to recover it.
00:26:24.463 [2024-04-26 15:36:41.853714] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.463 [2024-04-26 15:36:41.853763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.463 [2024-04-26 15:36:41.853776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.463 [2024-04-26 15:36:41.853783] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.463 [2024-04-26 15:36:41.853789] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.463 [2024-04-26 15:36:41.853802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.463 qpair failed and we were unable to recover it.
00:26:24.463 [2024-04-26 15:36:41.863743] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.463 [2024-04-26 15:36:41.863794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.463 [2024-04-26 15:36:41.863808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.463 [2024-04-26 15:36:41.863815] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.463 [2024-04-26 15:36:41.863821] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.463 [2024-04-26 15:36:41.863835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.463 qpair failed and we were unable to recover it.
00:26:24.463 [2024-04-26 15:36:41.873745] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.463 [2024-04-26 15:36:41.873802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.463 [2024-04-26 15:36:41.873816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.463 [2024-04-26 15:36:41.873823] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.463 [2024-04-26 15:36:41.873828] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.463 [2024-04-26 15:36:41.873847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.463 qpair failed and we were unable to recover it.
00:26:24.463 [2024-04-26 15:36:41.883645] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.463 [2024-04-26 15:36:41.883695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.463 [2024-04-26 15:36:41.883708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.463 [2024-04-26 15:36:41.883718] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.463 [2024-04-26 15:36:41.883725] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.463 [2024-04-26 15:36:41.883739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.463 qpair failed and we were unable to recover it.
00:26:24.463 [2024-04-26 15:36:41.893810] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.463 [2024-04-26 15:36:41.893860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.463 [2024-04-26 15:36:41.893874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.463 [2024-04-26 15:36:41.893881] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.463 [2024-04-26 15:36:41.893887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.463 [2024-04-26 15:36:41.893901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.463 qpair failed and we were unable to recover it.
00:26:24.463 [2024-04-26 15:36:41.903841] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.463 [2024-04-26 15:36:41.903893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.463 [2024-04-26 15:36:41.903907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.463 [2024-04-26 15:36:41.903914] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.463 [2024-04-26 15:36:41.903921] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.463 [2024-04-26 15:36:41.903935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.463 qpair failed and we were unable to recover it.
00:26:24.727 [2024-04-26 15:36:41.913925] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.727 [2024-04-26 15:36:41.914012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.727 [2024-04-26 15:36:41.914031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.727 [2024-04-26 15:36:41.914038] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.727 [2024-04-26 15:36:41.914044] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.727 [2024-04-26 15:36:41.914060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.727 qpair failed and we were unable to recover it.
00:26:24.727 [2024-04-26 15:36:41.923899] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.727 [2024-04-26 15:36:41.923945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.727 [2024-04-26 15:36:41.923959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.727 [2024-04-26 15:36:41.923966] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.727 [2024-04-26 15:36:41.923972] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.727 [2024-04-26 15:36:41.923986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.727 qpair failed and we were unable to recover it.
00:26:24.727 [2024-04-26 15:36:41.933929] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.727 [2024-04-26 15:36:41.933979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.727 [2024-04-26 15:36:41.933992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.727 [2024-04-26 15:36:41.933999] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.727 [2024-04-26 15:36:41.934005] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.727 [2024-04-26 15:36:41.934019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.727 qpair failed and we were unable to recover it.
00:26:24.727 [2024-04-26 15:36:41.943923] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.727 [2024-04-26 15:36:41.944003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.727 [2024-04-26 15:36:41.944017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.727 [2024-04-26 15:36:41.944024] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.727 [2024-04-26 15:36:41.944031] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.727 [2024-04-26 15:36:41.944045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.727 qpair failed and we were unable to recover it.
00:26:24.727 [2024-04-26 15:36:41.953995] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.727 [2024-04-26 15:36:41.954051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.727 [2024-04-26 15:36:41.954064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.727 [2024-04-26 15:36:41.954070] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.727 [2024-04-26 15:36:41.954076] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.727 [2024-04-26 15:36:41.954090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.727 qpair failed and we were unable to recover it.
00:26:24.727 [2024-04-26 15:36:41.963875] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.727 [2024-04-26 15:36:41.963924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.727 [2024-04-26 15:36:41.963937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.727 [2024-04-26 15:36:41.963944] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.727 [2024-04-26 15:36:41.963950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.727 [2024-04-26 15:36:41.963964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.727 qpair failed and we were unable to recover it.
00:26:24.727 [2024-04-26 15:36:41.974032] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.727 [2024-04-26 15:36:41.974081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.727 [2024-04-26 15:36:41.974098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.727 [2024-04-26 15:36:41.974105] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.727 [2024-04-26 15:36:41.974111] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.727 [2024-04-26 15:36:41.974124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.727 qpair failed and we were unable to recover it.
00:26:24.727 [2024-04-26 15:36:41.984119] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.727 [2024-04-26 15:36:41.984196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.727 [2024-04-26 15:36:41.984209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.727 [2024-04-26 15:36:41.984215] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.727 [2024-04-26 15:36:41.984221] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.727 [2024-04-26 15:36:41.984235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.727 qpair failed and we were unable to recover it.
00:26:24.727 [2024-04-26 15:36:41.993959] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.727 [2024-04-26 15:36:41.994010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.727 [2024-04-26 15:36:41.994023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.727 [2024-04-26 15:36:41.994030] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.727 [2024-04-26 15:36:41.994035] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.727 [2024-04-26 15:36:41.994049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.727 qpair failed and we were unable to recover it.
00:26:24.727 [2024-04-26 15:36:42.003992] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.727 [2024-04-26 15:36:42.004041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.727 [2024-04-26 15:36:42.004055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.727 [2024-04-26 15:36:42.004062] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.727 [2024-04-26 15:36:42.004068] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.727 [2024-04-26 15:36:42.004087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.727 qpair failed and we were unable to recover it.
00:26:24.727 [2024-04-26 15:36:42.014149] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.727 [2024-04-26 15:36:42.014197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.728 [2024-04-26 15:36:42.014211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.728 [2024-04-26 15:36:42.014218] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.728 [2024-04-26 15:36:42.014224] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.728 [2024-04-26 15:36:42.014241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.728 qpair failed and we were unable to recover it.
00:26:24.728 [2024-04-26 15:36:42.024160] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.728 [2024-04-26 15:36:42.024208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.728 [2024-04-26 15:36:42.024222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.728 [2024-04-26 15:36:42.024228] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.728 [2024-04-26 15:36:42.024234] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.728 [2024-04-26 15:36:42.024248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.728 qpair failed and we were unable to recover it.
00:26:24.728 [2024-04-26 15:36:42.034190] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.728 [2024-04-26 15:36:42.034240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.728 [2024-04-26 15:36:42.034254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.728 [2024-04-26 15:36:42.034261] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.728 [2024-04-26 15:36:42.034266] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.728 [2024-04-26 15:36:42.034280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.728 qpair failed and we were unable to recover it.
00:26:24.728 [2024-04-26 15:36:42.044191] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.728 [2024-04-26 15:36:42.044241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.728 [2024-04-26 15:36:42.044254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.728 [2024-04-26 15:36:42.044261] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.728 [2024-04-26 15:36:42.044267] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.728 [2024-04-26 15:36:42.044280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.728 qpair failed and we were unable to recover it.
00:26:24.728 [2024-04-26 15:36:42.054237] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.728 [2024-04-26 15:36:42.054286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.728 [2024-04-26 15:36:42.054300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.728 [2024-04-26 15:36:42.054306] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.728 [2024-04-26 15:36:42.054312] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.728 [2024-04-26 15:36:42.054326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.728 qpair failed and we were unable to recover it.
00:26:24.728 [2024-04-26 15:36:42.064271] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.728 [2024-04-26 15:36:42.064319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.728 [2024-04-26 15:36:42.064336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.728 [2024-04-26 15:36:42.064343] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.728 [2024-04-26 15:36:42.064349] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.728 [2024-04-26 15:36:42.064362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.728 qpair failed and we were unable to recover it.
00:26:24.728 [2024-04-26 15:36:42.074294] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.728 [2024-04-26 15:36:42.074349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.728 [2024-04-26 15:36:42.074362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.728 [2024-04-26 15:36:42.074369] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.728 [2024-04-26 15:36:42.074375] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.728 [2024-04-26 15:36:42.074388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.728 qpair failed and we were unable to recover it.
00:26:24.728 [2024-04-26 15:36:42.084338] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.728 [2024-04-26 15:36:42.084386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.728 [2024-04-26 15:36:42.084399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.728 [2024-04-26 15:36:42.084406] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.728 [2024-04-26 15:36:42.084412] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.728 [2024-04-26 15:36:42.084425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.728 qpair failed and we were unable to recover it.
00:26:24.728 [2024-04-26 15:36:42.094356] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.728 [2024-04-26 15:36:42.094405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.728 [2024-04-26 15:36:42.094418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.728 [2024-04-26 15:36:42.094425] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.728 [2024-04-26 15:36:42.094431] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.728 [2024-04-26 15:36:42.094444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.728 qpair failed and we were unable to recover it.
00:26:24.728 [2024-04-26 15:36:42.104381] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.728 [2024-04-26 15:36:42.104429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.728 [2024-04-26 15:36:42.104443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.728 [2024-04-26 15:36:42.104450] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.728 [2024-04-26 15:36:42.104456] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.728 [2024-04-26 15:36:42.104473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.728 qpair failed and we were unable to recover it.
00:26:24.728 [2024-04-26 15:36:42.114391] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.728 [2024-04-26 15:36:42.114446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.728 [2024-04-26 15:36:42.114460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.728 [2024-04-26 15:36:42.114467] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.728 [2024-04-26 15:36:42.114473] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.728 [2024-04-26 15:36:42.114486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.728 qpair failed and we were unable to recover it.
00:26:24.728 [2024-04-26 15:36:42.124411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.728 [2024-04-26 15:36:42.124459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.728 [2024-04-26 15:36:42.124472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.728 [2024-04-26 15:36:42.124479] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.728 [2024-04-26 15:36:42.124485] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.728 [2024-04-26 15:36:42.124499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.728 qpair failed and we were unable to recover it.
00:26:24.728 [2024-04-26 15:36:42.134430] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.728 [2024-04-26 15:36:42.134474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.728 [2024-04-26 15:36:42.134488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.728 [2024-04-26 15:36:42.134495] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.728 [2024-04-26 15:36:42.134501] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90
00:26:24.728 [2024-04-26 15:36:42.134514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:24.728 qpair failed and we were unable to recover it.
00:26:24.728 [2024-04-26 15:36:42.144385] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.728 [2024-04-26 15:36:42.144437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.728 [2024-04-26 15:36:42.144451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.729 [2024-04-26 15:36:42.144458] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.729 [2024-04-26 15:36:42.144464] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.729 [2024-04-26 15:36:42.144478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.729 qpair failed and we were unable to recover it. 
00:26:24.729 [2024-04-26 15:36:42.154572] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.729 [2024-04-26 15:36:42.154628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.729 [2024-04-26 15:36:42.154646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.729 [2024-04-26 15:36:42.154653] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.729 [2024-04-26 15:36:42.154659] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.729 [2024-04-26 15:36:42.154672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.729 qpair failed and we were unable to recover it. 
00:26:24.729 [2024-04-26 15:36:42.164534] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.729 [2024-04-26 15:36:42.164588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.729 [2024-04-26 15:36:42.164612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.729 [2024-04-26 15:36:42.164620] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.729 [2024-04-26 15:36:42.164627] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.729 [2024-04-26 15:36:42.164646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.729 qpair failed and we were unable to recover it. 
00:26:24.992 [2024-04-26 15:36:42.174583] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.992 [2024-04-26 15:36:42.174639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.992 [2024-04-26 15:36:42.174663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.992 [2024-04-26 15:36:42.174672] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.992 [2024-04-26 15:36:42.174678] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.992 [2024-04-26 15:36:42.174697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.992 qpair failed and we were unable to recover it. 
00:26:24.992 [2024-04-26 15:36:42.184665] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.992 [2024-04-26 15:36:42.184742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.992 [2024-04-26 15:36:42.184765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.992 [2024-04-26 15:36:42.184773] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.992 [2024-04-26 15:36:42.184780] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.992 [2024-04-26 15:36:42.184798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.992 qpair failed and we were unable to recover it. 
00:26:24.992 [2024-04-26 15:36:42.194664] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.992 [2024-04-26 15:36:42.194719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.992 [2024-04-26 15:36:42.194735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.992 [2024-04-26 15:36:42.194742] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.992 [2024-04-26 15:36:42.194754] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.992 [2024-04-26 15:36:42.194769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.992 qpair failed and we were unable to recover it. 
00:26:24.992 [2024-04-26 15:36:42.204650] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.992 [2024-04-26 15:36:42.204755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.992 [2024-04-26 15:36:42.204769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.992 [2024-04-26 15:36:42.204776] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.992 [2024-04-26 15:36:42.204782] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.992 [2024-04-26 15:36:42.204797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.992 qpair failed and we were unable to recover it. 
00:26:24.992 [2024-04-26 15:36:42.214683] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.992 [2024-04-26 15:36:42.214733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.992 [2024-04-26 15:36:42.214747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.992 [2024-04-26 15:36:42.214754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.992 [2024-04-26 15:36:42.214760] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.992 [2024-04-26 15:36:42.214773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.992 qpair failed and we were unable to recover it. 
00:26:24.992 [2024-04-26 15:36:42.224720] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.992 [2024-04-26 15:36:42.224766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.992 [2024-04-26 15:36:42.224780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.992 [2024-04-26 15:36:42.224787] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.992 [2024-04-26 15:36:42.224792] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.992 [2024-04-26 15:36:42.224806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.992 qpair failed and we were unable to recover it. 
00:26:24.992 [2024-04-26 15:36:42.234738] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.992 [2024-04-26 15:36:42.234786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.992 [2024-04-26 15:36:42.234800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.992 [2024-04-26 15:36:42.234807] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.992 [2024-04-26 15:36:42.234813] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.992 [2024-04-26 15:36:42.234827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.992 qpair failed and we were unable to recover it. 
00:26:24.992 [2024-04-26 15:36:42.244633] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.992 [2024-04-26 15:36:42.244683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.992 [2024-04-26 15:36:42.244697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.992 [2024-04-26 15:36:42.244704] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.992 [2024-04-26 15:36:42.244710] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.992 [2024-04-26 15:36:42.244723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.992 qpair failed and we were unable to recover it. 
00:26:24.992 [2024-04-26 15:36:42.254805] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.992 [2024-04-26 15:36:42.254862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.992 [2024-04-26 15:36:42.254876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.992 [2024-04-26 15:36:42.254883] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.992 [2024-04-26 15:36:42.254889] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.992 [2024-04-26 15:36:42.254903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.992 qpair failed and we were unable to recover it. 
00:26:24.992 [2024-04-26 15:36:42.264808] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.992 [2024-04-26 15:36:42.264861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.993 [2024-04-26 15:36:42.264875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.993 [2024-04-26 15:36:42.264881] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.993 [2024-04-26 15:36:42.264887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.993 [2024-04-26 15:36:42.264901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.993 qpair failed and we were unable to recover it. 
00:26:24.993 [2024-04-26 15:36:42.274849] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.993 [2024-04-26 15:36:42.274940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.993 [2024-04-26 15:36:42.274954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.993 [2024-04-26 15:36:42.274961] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.993 [2024-04-26 15:36:42.274967] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.993 [2024-04-26 15:36:42.274980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.993 qpair failed and we were unable to recover it. 
00:26:24.993 [2024-04-26 15:36:42.284832] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.993 [2024-04-26 15:36:42.284886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.993 [2024-04-26 15:36:42.284900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.993 [2024-04-26 15:36:42.284910] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.993 [2024-04-26 15:36:42.284916] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.993 [2024-04-26 15:36:42.284930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.993 qpair failed and we were unable to recover it. 
00:26:24.993 [2024-04-26 15:36:42.294908] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.993 [2024-04-26 15:36:42.294954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.993 [2024-04-26 15:36:42.294968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.993 [2024-04-26 15:36:42.294974] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.993 [2024-04-26 15:36:42.294980] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.993 [2024-04-26 15:36:42.294994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.993 qpair failed and we were unable to recover it. 
00:26:24.993 [2024-04-26 15:36:42.304917] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.993 [2024-04-26 15:36:42.304963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.993 [2024-04-26 15:36:42.304976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.993 [2024-04-26 15:36:42.304983] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.993 [2024-04-26 15:36:42.304989] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.993 [2024-04-26 15:36:42.305003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.993 qpair failed and we were unable to recover it. 
00:26:24.993 [2024-04-26 15:36:42.314931] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.993 [2024-04-26 15:36:42.314997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.993 [2024-04-26 15:36:42.315011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.993 [2024-04-26 15:36:42.315018] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.993 [2024-04-26 15:36:42.315024] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.993 [2024-04-26 15:36:42.315037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.993 qpair failed and we were unable to recover it. 
00:26:24.993 [2024-04-26 15:36:42.324984] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.993 [2024-04-26 15:36:42.325042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.993 [2024-04-26 15:36:42.325055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.993 [2024-04-26 15:36:42.325062] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.993 [2024-04-26 15:36:42.325068] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.993 [2024-04-26 15:36:42.325082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.993 qpair failed and we were unable to recover it. 
00:26:24.993 [2024-04-26 15:36:42.334985] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.993 [2024-04-26 15:36:42.335036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.993 [2024-04-26 15:36:42.335050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.993 [2024-04-26 15:36:42.335056] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.993 [2024-04-26 15:36:42.335062] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.993 [2024-04-26 15:36:42.335076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.993 qpair failed and we were unable to recover it. 
00:26:24.993 [2024-04-26 15:36:42.345087] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.993 [2024-04-26 15:36:42.345138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.993 [2024-04-26 15:36:42.345152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.993 [2024-04-26 15:36:42.345158] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.993 [2024-04-26 15:36:42.345164] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.993 [2024-04-26 15:36:42.345177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.993 qpair failed and we were unable to recover it. 
00:26:24.993 [2024-04-26 15:36:42.355046] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.993 [2024-04-26 15:36:42.355100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.993 [2024-04-26 15:36:42.355114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.993 [2024-04-26 15:36:42.355120] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.993 [2024-04-26 15:36:42.355126] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.993 [2024-04-26 15:36:42.355140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.993 qpair failed and we were unable to recover it. 
00:26:24.993 [2024-04-26 15:36:42.364988] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.993 [2024-04-26 15:36:42.365040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.993 [2024-04-26 15:36:42.365053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.993 [2024-04-26 15:36:42.365060] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.993 [2024-04-26 15:36:42.365066] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.993 [2024-04-26 15:36:42.365079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.993 qpair failed and we were unable to recover it. 
00:26:24.993 [2024-04-26 15:36:42.375093] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.993 [2024-04-26 15:36:42.375145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.993 [2024-04-26 15:36:42.375158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.993 [2024-04-26 15:36:42.375168] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.993 [2024-04-26 15:36:42.375175] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.993 [2024-04-26 15:36:42.375188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.993 qpair failed and we were unable to recover it. 
00:26:24.993 [2024-04-26 15:36:42.385161] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.993 [2024-04-26 15:36:42.385210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.993 [2024-04-26 15:36:42.385224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.994 [2024-04-26 15:36:42.385231] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.994 [2024-04-26 15:36:42.385237] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.994 [2024-04-26 15:36:42.385250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.994 qpair failed and we were unable to recover it. 
00:26:24.994 [2024-04-26 15:36:42.395176] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.994 [2024-04-26 15:36:42.395230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.994 [2024-04-26 15:36:42.395244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.994 [2024-04-26 15:36:42.395250] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.994 [2024-04-26 15:36:42.395257] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.994 [2024-04-26 15:36:42.395270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.994 qpair failed and we were unable to recover it. 
00:26:24.994 [2024-04-26 15:36:42.405224] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.994 [2024-04-26 15:36:42.405276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.994 [2024-04-26 15:36:42.405289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.994 [2024-04-26 15:36:42.405296] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.994 [2024-04-26 15:36:42.405302] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.994 [2024-04-26 15:36:42.405315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.994 qpair failed and we were unable to recover it. 
00:26:24.994 [2024-04-26 15:36:42.415198] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.994 [2024-04-26 15:36:42.415249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.994 [2024-04-26 15:36:42.415267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.994 [2024-04-26 15:36:42.415274] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.994 [2024-04-26 15:36:42.415280] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.994 [2024-04-26 15:36:42.415295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.994 qpair failed and we were unable to recover it. 
00:26:24.994 [2024-04-26 15:36:42.425248] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.994 [2024-04-26 15:36:42.425297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.994 [2024-04-26 15:36:42.425311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.994 [2024-04-26 15:36:42.425318] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.994 [2024-04-26 15:36:42.425324] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.994 [2024-04-26 15:36:42.425338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.994 qpair failed and we were unable to recover it. 
00:26:24.994 [2024-04-26 15:36:42.435169] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.994 [2024-04-26 15:36:42.435233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.994 [2024-04-26 15:36:42.435248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.994 [2024-04-26 15:36:42.435257] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.994 [2024-04-26 15:36:42.435264] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb678000b90 00:26:24.994 [2024-04-26 15:36:42.435278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.994 qpair failed and we were unable to recover it. 
00:26:24.994 Read completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Read completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Read completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Read completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Read completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Read completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Read completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Read completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Read completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Read completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Read completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Write completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Write completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Write completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Read completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Read completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Write completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Write completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Write completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Read completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Write completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Read completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Write completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Read completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Read completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Read completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Read completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Read completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Read completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Write completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Write completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 Write completed with error (sct=0, sc=8)
00:26:24.994 starting I/O failed
00:26:24.994 [2024-04-26 15:36:42.436176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:25.256 [2024-04-26 15:36:42.445307] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:25.256 [2024-04-26 15:36:42.445448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:25.256 [2024-04-26 15:36:42.445497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:25.256 [2024-04-26 15:36:42.445520] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:25.256 [2024-04-26 15:36:42.445539] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb688000b90
00:26:25.256 [2024-04-26 15:36:42.445584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:25.256 qpair failed and we were unable to recover it.
00:26:25.256 [2024-04-26 15:36:42.455271] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:25.256 [2024-04-26 15:36:42.455348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:25.256 [2024-04-26 15:36:42.455376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:25.256 [2024-04-26 15:36:42.455391] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:25.256 [2024-04-26 15:36:42.455403] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb688000b90
00:26:25.256 [2024-04-26 15:36:42.455431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:25.256 qpair failed and we were unable to recover it.
00:26:25.256 [2024-04-26 15:36:42.455705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d40d0 is same with the state(5) to be set
00:26:25.256 Write completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Read completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Write completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Read completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Read completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Write completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Write completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Write completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Write completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Read completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Write completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Read completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Write completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Read completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Write completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Read completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Read completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Read completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Read completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Read completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Write completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Read completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Read completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Write completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Write completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Read completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Write completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Read completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Write completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Write completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Write completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Write completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 [2024-04-26 15:36:42.456055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:25.256 [2024-04-26 15:36:42.465358] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:25.256 [2024-04-26 15:36:42.465406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:25.256 [2024-04-26 15:36:42.465420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:25.256 [2024-04-26 15:36:42.465426] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:25.256 [2024-04-26 15:36:42.465431] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb680000b90
00:26:25.256 [2024-04-26 15:36:42.465442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:25.256 qpair failed and we were unable to recover it.
00:26:25.256 [2024-04-26 15:36:42.475394] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:25.256 [2024-04-26 15:36:42.475439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:25.256 [2024-04-26 15:36:42.475451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:25.256 [2024-04-26 15:36:42.475456] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:25.256 [2024-04-26 15:36:42.475460] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb680000b90
00:26:25.256 [2024-04-26 15:36:42.475470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:25.256 qpair failed and we were unable to recover it.
00:26:25.256 Read completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Read completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Read completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Read completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Read completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Read completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Write completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Write completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.256 Write completed with error (sct=0, sc=8)
00:26:25.256 starting I/O failed
00:26:25.257 Read completed with error (sct=0, sc=8)
00:26:25.257 starting I/O failed
00:26:25.257 Write completed with error (sct=0, sc=8)
00:26:25.257 starting I/O failed
00:26:25.257 Write completed with error (sct=0, sc=8)
00:26:25.257 starting I/O failed
00:26:25.257 Read completed with error (sct=0, sc=8)
00:26:25.257 starting I/O failed
00:26:25.257 Write completed with error (sct=0, sc=8)
00:26:25.257 starting I/O failed
00:26:25.257 Write completed with error (sct=0, sc=8)
00:26:25.257 starting I/O failed
00:26:25.257 Read completed with error (sct=0, sc=8)
00:26:25.257 starting I/O failed
00:26:25.257 Read completed with error (sct=0, sc=8)
00:26:25.257 starting I/O failed
00:26:25.257 Write completed with error (sct=0, sc=8)
00:26:25.257 starting I/O failed
00:26:25.257 Read completed with error (sct=0, sc=8)
00:26:25.257 starting I/O failed
00:26:25.257 Read completed with error (sct=0, sc=8)
00:26:25.257 starting I/O failed
00:26:25.257 Read completed with error (sct=0, sc=8)
00:26:25.257 starting I/O failed
00:26:25.257 Read completed with error (sct=0, sc=8)
00:26:25.257 starting I/O failed
00:26:25.257 Write completed with error (sct=0, sc=8)
00:26:25.257 starting I/O failed
00:26:25.257 Write completed with error (sct=0, sc=8)
00:26:25.257 starting I/O failed
00:26:25.257 Write completed with error (sct=0, sc=8)
00:26:25.257 starting I/O failed
00:26:25.257 Write completed with error (sct=0, sc=8)
00:26:25.257 starting I/O failed
00:26:25.257 Write completed with error (sct=0, sc=8)
00:26:25.257 starting I/O failed
00:26:25.257 Write completed with error (sct=0, sc=8)
00:26:25.257 starting I/O failed
00:26:25.257 Write completed with error (sct=0, sc=8)
00:26:25.257 starting I/O failed
00:26:25.257 Write completed with error (sct=0, sc=8)
00:26:25.257 starting I/O failed
00:26:25.257 Write completed with error (sct=0, sc=8)
00:26:25.257 starting I/O failed
00:26:25.257 Read completed with error (sct=0, sc=8)
00:26:25.257 starting I/O failed
00:26:25.257 [2024-04-26 15:36:42.475892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:25.257 [2024-04-26 15:36:42.485418] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:25.257 [2024-04-26 15:36:42.485472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:25.257 [2024-04-26 15:36:42.485501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:25.257 [2024-04-26 15:36:42.485510] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:25.257 [2024-04-26 15:36:42.485516] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14c6570
00:26:25.257 [2024-04-26 15:36:42.485534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:25.257 qpair failed and we were unable to recover it.
00:26:25.257 [2024-04-26 15:36:42.495434] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:25.257 [2024-04-26 15:36:42.495489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:25.257 [2024-04-26 15:36:42.495514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:25.257 [2024-04-26 15:36:42.495522] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:25.257 [2024-04-26 15:36:42.495528] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14c6570
00:26:25.257 [2024-04-26 15:36:42.495547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:25.257 qpair failed and we were unable to recover it.
00:26:25.257 [2024-04-26 15:36:42.496114] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d40d0 (9): Bad file descriptor
00:26:25.257 Initializing NVMe Controllers
00:26:25.257 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:25.257 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:25.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:26:25.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:26:25.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:26:25.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:26:25.257 Initialization complete. Launching workers.
00:26:25.257 Starting thread on core 1
00:26:25.257 Starting thread on core 2
00:26:25.257 Starting thread on core 3
00:26:25.257 Starting thread on core 0
00:26:25.257 15:36:42 -- host/target_disconnect.sh@59 -- # sync
00:26:25.257
00:26:25.257 real 0m11.376s
00:26:25.257 user 0m21.300s
00:26:25.257 sys 0m3.626s
00:26:25.257 15:36:42 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:26:25.257 15:36:42 -- common/autotest_common.sh@10 -- # set +x
00:26:25.257 ************************************
00:26:25.257 END TEST nvmf_target_disconnect_tc2
00:26:25.257 ************************************
00:26:25.257 15:36:42 -- host/target_disconnect.sh@80 -- # '[' -n '' ']'
00:26:25.257 15:36:42 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:26:25.257 15:36:42 -- host/target_disconnect.sh@85 -- # nvmftestfini
00:26:25.257 15:36:42 -- nvmf/common.sh@477 -- # nvmfcleanup
00:26:25.257 15:36:42 -- nvmf/common.sh@117 -- # sync
00:26:25.257 15:36:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:25.257 15:36:42 -- nvmf/common.sh@120 -- # set +e
00:26:25.257 15:36:42 -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:25.257 15:36:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:25.257 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
15:36:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:25.257 15:36:42 -- nvmf/common.sh@124 -- # set -e
00:26:25.257 15:36:42 -- nvmf/common.sh@125 -- # return 0
00:26:25.257 15:36:42 -- nvmf/common.sh@478 -- # '[' -n 1790497 ']'
00:26:25.257 15:36:42 -- nvmf/common.sh@479 -- # killprocess 1790497
00:26:25.257 15:36:42 -- common/autotest_common.sh@936 -- # '[' -z 1790497 ']'
00:26:25.257 15:36:42 -- common/autotest_common.sh@940 -- # kill -0 1790497
00:26:25.257 15:36:42 -- common/autotest_common.sh@941 -- # uname
00:26:25.257 15:36:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:26:25.257 15:36:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1790497
00:26:25.257 15:36:42 -- common/autotest_common.sh@942 -- # process_name=reactor_4
00:26:25.257 15:36:42 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']'
00:26:25.257 15:36:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1790497'
killing process with pid 1790497
00:26:25.257 15:36:42 -- common/autotest_common.sh@955 -- # kill 1790497
00:26:25.257 15:36:42 -- common/autotest_common.sh@960 -- # wait 1790497
00:26:25.518 15:36:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:26:25.518 15:36:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:26:25.518 15:36:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:26:25.518 15:36:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:25.518 15:36:42 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:25.518 15:36:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:25.518 15:36:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:25.518 15:36:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:27.430 15:36:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:27.430
00:26:27.430 real 0m21.496s
00:26:27.430 user 0m48.982s
00:26:27.430 sys 0m9.468s
00:26:27.430 15:36:44 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:26:27.430 15:36:44 -- common/autotest_common.sh@10 -- # set +x
00:26:27.430 ************************************
00:26:27.430 END TEST nvmf_target_disconnect
00:26:27.430 ************************************
00:26:27.692 15:36:44 -- nvmf/nvmf.sh@123 -- # timing_exit host
00:26:27.692 15:36:44 -- common/autotest_common.sh@716 -- # xtrace_disable
00:26:27.692 15:36:44 -- common/autotest_common.sh@10 -- # set +x
00:26:27.692 15:36:44 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT
00:26:27.692
00:26:27.692 real 19m39.963s
00:26:27.692 user 40m15.951s
00:26:27.692 sys 6m28.799s
00:26:27.692 15:36:44 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:26:27.692 15:36:44 -- common/autotest_common.sh@10 -- # set +x
00:26:27.692 ************************************
00:26:27.692 END TEST nvmf_tcp
00:26:27.692 ************************************
00:26:27.692 15:36:44 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]]
00:26:27.692 15:36:44 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:26:27.692 15:36:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:26:27.692 15:36:44 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:27.692 15:36:44 -- common/autotest_common.sh@10 -- # set +x
00:26:27.952 ************************************
00:26:27.952 START TEST spdkcli_nvmf_tcp
00:26:27.952 ************************************
00:26:27.952 15:36:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:26:27.952 * Looking for test storage...
00:26:27.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:26:27.953 15:36:45 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:26:27.953 15:36:45 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:26:27.953 15:36:45 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:26:27.953 15:36:45 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:27.953 15:36:45 -- nvmf/common.sh@7 -- # uname -s
00:26:27.953 15:36:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:27.953 15:36:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:27.953 15:36:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:27.953 15:36:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:27.953 15:36:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:27.953 15:36:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:27.953 15:36:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:27.953 15:36:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:27.953 15:36:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:27.953 15:36:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:27.953 15:36:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:26:27.953 15:36:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:26:27.953 15:36:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:27.953 15:36:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:27.953 15:36:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:27.953 15:36:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:27.953 15:36:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:27.953 15:36:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:27.953 15:36:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:27.953 15:36:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:27.953 15:36:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:27.953 15:36:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:27.953 15:36:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:27.953 15:36:45 -- paths/export.sh@5 -- # export PATH
00:26:27.953 15:36:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:27.953 15:36:45 -- nvmf/common.sh@47 -- # : 0
00:26:27.953 15:36:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:26:27.953 15:36:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:26:27.953 15:36:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:27.953 15:36:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:27.953 15:36:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:27.953 15:36:45 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:26:27.953 15:36:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:26:27.953 15:36:45 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:26:27.953 15:36:45 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:26:27.953 15:36:45 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:26:27.953 15:36:45 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:26:27.953 15:36:45 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:26:27.953 15:36:45 -- common/autotest_common.sh@710 -- # xtrace_disable
00:26:27.953 15:36:45 -- common/autotest_common.sh@10 -- # set +x
00:26:27.953 15:36:45 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:26:27.953 15:36:45 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1792442
00:26:27.953 15:36:45 -- spdkcli/common.sh@34 -- # waitforlisten 1792442
00:26:27.953 15:36:45 -- common/autotest_common.sh@817 -- # '[' -z 1792442 ']'
00:26:27.953 15:36:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:27.953 15:36:45 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:26:27.953 15:36:45 -- common/autotest_common.sh@822 -- # local max_retries=100
00:26:27.953 15:36:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:27.953 15:36:45 -- common/autotest_common.sh@826 -- # xtrace_disable
00:26:27.953 15:36:45 -- common/autotest_common.sh@10 -- # set +x
00:26:27.953 [2024-04-26 15:36:45.352101] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization...
00:26:27.953 [2024-04-26 15:36:45.352168] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1792442 ]
00:26:28.213 EAL: No free 2048 kB hugepages reported on node 1
00:26:28.213 [2024-04-26 15:36:45.414946] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:26:28.213 [2024-04-26 15:36:45.478636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:28.213 [2024-04-26 15:36:45.478639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:28.782 15:36:46 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:26:28.782 15:36:46 -- common/autotest_common.sh@850 -- # return 0
00:26:28.782 15:36:46 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:26:28.782 15:36:46 -- common/autotest_common.sh@716 -- # xtrace_disable
00:26:28.782 15:36:46 -- common/autotest_common.sh@10 -- # set +x
00:26:28.782 15:36:46 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:26:28.782 15:36:46 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]]
00:26:28.782 15:36:46 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:26:28.782 15:36:46 -- common/autotest_common.sh@710 -- # xtrace_disable
00:26:28.782 15:36:46 -- common/autotest_common.sh@10 -- # set +x
00:26:28.782 15:36:46 -- spdkcli/nvmf.sh@65 --
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:26:28.782 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:26:28.782 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:26:28.782 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:26:28.782 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:26:28.782 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:26:28.782 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:26:28.782 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:26:28.782 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:26:28.782 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True
00:26:28.782 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:26:28.782 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:26:28.782 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True
00:26:28.782 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:26:28.782 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:26:28.782 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True
00:26:28.782 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:26:28.782 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:26:28.782 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:26:28.782 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:26:28.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\''
00:26:28.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True
00:26:28.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:26:28.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True
00:26:28.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:26:28.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True
00:26:28.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True
00:26:28.783 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' '
00:26:29.043 [2024-04-26 15:36:46.479210] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:26:31.585 [2024-04-26 15:36:48.482718] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:32.526 [2024-04-26 15:36:49.646449] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 ***
00:26:34.471 [2024-04-26 15:36:51.784919] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 ***
00:26:36.382 [2024-04-26 15:36:53.618303] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:26:37.765 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:26:37.765 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:26:37.765 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:26:37.765 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:26:37.765 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:26:37.765 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:26:37.765 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:26:37.765 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:37.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:26:37.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:26:37.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:37.765 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:37.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:26:37.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:37.765 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:37.765 
Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:26:37.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:37.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:37.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:37.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:37.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:26:37.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:26:37.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:37.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:26:37.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:37.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:26:37.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:26:37.765 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:26:37.765 15:36:55 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:26:37.765 15:36:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:37.765 15:36:55 -- common/autotest_common.sh@10 -- # 
set +x 00:26:37.765 15:36:55 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:26:37.765 15:36:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:37.765 15:36:55 -- common/autotest_common.sh@10 -- # set +x 00:26:37.765 15:36:55 -- spdkcli/nvmf.sh@69 -- # check_match 00:26:37.765 15:36:55 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:26:38.339 15:36:55 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:26:38.339 15:36:55 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:38.339 15:36:55 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:38.339 15:36:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:38.339 15:36:55 -- common/autotest_common.sh@10 -- # set +x 00:26:38.339 15:36:55 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:38.339 15:36:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:38.339 15:36:55 -- common/autotest_common.sh@10 -- # set +x 00:26:38.339 15:36:55 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:38.339 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:38.339 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:38.339 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:38.339 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:26:38.339 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:26:38.339 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:38.339 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:38.339 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:38.339 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:38.339 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:38.339 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:38.339 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:38.339 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:38.339 ' 00:26:43.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:43.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:43.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:43.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:43.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:43.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:43.634 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:43.634 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:43.634 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:43.634 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:43.634 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 
00:26:43.634 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:43.634 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:43.634 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:43.634 15:37:00 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:43.634 15:37:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:43.634 15:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:43.634 15:37:00 -- spdkcli/nvmf.sh@90 -- # killprocess 1792442 00:26:43.634 15:37:00 -- common/autotest_common.sh@936 -- # '[' -z 1792442 ']' 00:26:43.634 15:37:00 -- common/autotest_common.sh@940 -- # kill -0 1792442 00:26:43.634 15:37:00 -- common/autotest_common.sh@941 -- # uname 00:26:43.634 15:37:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:43.634 15:37:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1792442 00:26:43.634 15:37:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:43.634 15:37:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:43.634 15:37:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1792442' 00:26:43.634 killing process with pid 1792442 00:26:43.634 15:37:00 -- common/autotest_common.sh@955 -- # kill 1792442 00:26:43.634 [2024-04-26 15:37:00.561740] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:43.634 15:37:00 -- common/autotest_common.sh@960 -- # wait 1792442 00:26:43.634 15:37:00 -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:43.634 15:37:00 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:43.634 15:37:00 -- spdkcli/common.sh@13 -- # '[' -n 1792442 ']' 00:26:43.634 15:37:00 -- spdkcli/common.sh@14 -- # killprocess 1792442 00:26:43.634 15:37:00 -- common/autotest_common.sh@936 -- # '[' -z 1792442 ']' 00:26:43.634 15:37:00 -- 
common/autotest_common.sh@940 -- # kill -0 1792442 00:26:43.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1792442) - No such process 00:26:43.634 15:37:00 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1792442 is not found' 00:26:43.634 Process with pid 1792442 is not found 00:26:43.634 15:37:00 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:43.634 15:37:00 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:43.634 15:37:00 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:43.634 00:26:43.634 real 0m15.540s 00:26:43.634 user 0m31.972s 00:26:43.634 sys 0m0.684s 00:26:43.634 15:37:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:43.634 15:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:43.634 ************************************ 00:26:43.634 END TEST spdkcli_nvmf_tcp 00:26:43.634 ************************************ 00:26:43.634 15:37:00 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:43.634 15:37:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:43.634 15:37:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:43.634 15:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:43.634 ************************************ 00:26:43.634 START TEST nvmf_identify_passthru 00:26:43.634 ************************************ 00:26:43.634 15:37:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:43.634 * Looking for test storage... 
00:26:43.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:43.634 15:37:00 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:43.634 15:37:00 -- nvmf/common.sh@7 -- # uname -s 00:26:43.634 15:37:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:43.634 15:37:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:43.634 15:37:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:43.634 15:37:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:43.634 15:37:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:43.634 15:37:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:43.634 15:37:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:43.634 15:37:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:43.634 15:37:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:43.634 15:37:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:43.634 15:37:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:43.634 15:37:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:43.634 15:37:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:43.634 15:37:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:43.634 15:37:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:43.634 15:37:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:43.634 15:37:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:43.634 15:37:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:43.634 15:37:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:43.634 15:37:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:43.634 15:37:01 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.634 15:37:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.634 15:37:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.634 15:37:01 -- paths/export.sh@5 -- # export PATH 00:26:43.634 15:37:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.634 15:37:01 -- nvmf/common.sh@47 -- # : 0 00:26:43.634 15:37:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:43.634 15:37:01 -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:26:43.634 15:37:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:43.634 15:37:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:43.634 15:37:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:43.634 15:37:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:43.634 15:37:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:43.634 15:37:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:43.634 15:37:01 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:43.634 15:37:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:43.634 15:37:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:43.634 15:37:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:43.634 15:37:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.634 15:37:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.634 15:37:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.634 15:37:01 -- paths/export.sh@5 -- # export PATH 00:26:43.634 15:37:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.634 15:37:01 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:26:43.634 15:37:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:43.634 15:37:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:43.634 15:37:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:43.634 15:37:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:43.634 15:37:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:43.634 15:37:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.634 15:37:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:43.634 15:37:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.634 15:37:01 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:43.634 15:37:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:43.634 15:37:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:43.634 15:37:01 -- 
common/autotest_common.sh@10 -- # set +x 00:26:51.779 15:37:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:51.779 15:37:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:51.779 15:37:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:51.779 15:37:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:51.779 15:37:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:51.779 15:37:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:51.779 15:37:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:51.779 15:37:07 -- nvmf/common.sh@295 -- # net_devs=() 00:26:51.779 15:37:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:51.779 15:37:07 -- nvmf/common.sh@296 -- # e810=() 00:26:51.779 15:37:07 -- nvmf/common.sh@296 -- # local -ga e810 00:26:51.779 15:37:07 -- nvmf/common.sh@297 -- # x722=() 00:26:51.779 15:37:07 -- nvmf/common.sh@297 -- # local -ga x722 00:26:51.779 15:37:07 -- nvmf/common.sh@298 -- # mlx=() 00:26:51.779 15:37:07 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:51.779 15:37:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:51.779 15:37:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:51.779 15:37:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:51.779 15:37:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:51.779 15:37:07 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:51.779 15:37:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:51.779 15:37:07 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:51.779 15:37:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:51.779 15:37:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:51.779 15:37:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:51.779 15:37:07 -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:51.779 15:37:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:51.779 15:37:07 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:51.779 15:37:07 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:51.779 15:37:07 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:51.779 15:37:07 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:51.779 15:37:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:51.779 15:37:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.779 15:37:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:51.779 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:51.779 15:37:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.779 15:37:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.779 15:37:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.779 15:37:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.779 15:37:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.779 15:37:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.779 15:37:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:51.779 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:51.779 15:37:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.779 15:37:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.779 15:37:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.779 15:37:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.779 15:37:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.779 15:37:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:51.779 15:37:07 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:51.779 15:37:07 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:51.779 15:37:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.779 15:37:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:26:51.779 15:37:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:51.779 15:37:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.779 15:37:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:51.779 Found net devices under 0000:31:00.0: cvl_0_0 00:26:51.779 15:37:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.779 15:37:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.779 15:37:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.779 15:37:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:51.780 15:37:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.780 15:37:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:51.780 Found net devices under 0000:31:00.1: cvl_0_1 00:26:51.780 15:37:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.780 15:37:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:51.780 15:37:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:51.780 15:37:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:51.780 15:37:07 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:51.780 15:37:07 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:51.780 15:37:07 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:51.780 15:37:07 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.780 15:37:07 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:51.780 15:37:07 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:51.780 15:37:07 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:51.780 15:37:07 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:51.780 15:37:07 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:51.780 15:37:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:51.780 15:37:07 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:26:51.780 15:37:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:51.780 15:37:07 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:51.780 15:37:07 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:51.780 15:37:07 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:51.780 15:37:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:51.780 15:37:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:51.780 15:37:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:51.780 15:37:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:51.780 15:37:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:51.780 15:37:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:51.780 15:37:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:51.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:51.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.719 ms 00:26:51.780 00:26:51.780 --- 10.0.0.2 ping statistics --- 00:26:51.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.780 rtt min/avg/max/mdev = 0.719/0.719/0.719/0.000 ms 00:26:51.780 15:37:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:51.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:51.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:26:51.780 00:26:51.780 --- 10.0.0.1 ping statistics --- 00:26:51.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.780 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:26:51.780 15:37:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:51.780 15:37:08 -- nvmf/common.sh@411 -- # return 0 00:26:51.780 15:37:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:51.780 15:37:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:51.780 15:37:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:51.780 15:37:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:51.780 15:37:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:51.780 15:37:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:51.780 15:37:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:51.780 15:37:08 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:51.780 15:37:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:51.780 15:37:08 -- common/autotest_common.sh@10 -- # set +x 00:26:51.780 15:37:08 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:51.780 15:37:08 -- common/autotest_common.sh@1510 -- # bdfs=() 00:26:51.780 15:37:08 -- common/autotest_common.sh@1510 -- # local bdfs 00:26:51.780 15:37:08 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:26:51.780 15:37:08 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:26:51.780 15:37:08 -- common/autotest_common.sh@1499 -- # bdfs=() 00:26:51.780 15:37:08 -- common/autotest_common.sh@1499 -- # local bdfs 00:26:51.780 15:37:08 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:51.780 15:37:08 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:51.780 15:37:08 -- 
common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:26:51.780 15:37:08 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:26:51.780 15:37:08 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:26:51.780 15:37:08 -- common/autotest_common.sh@1513 -- # echo 0000:65:00.0 00:26:51.780 15:37:08 -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:26:51.780 15:37:08 -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:26:51.780 15:37:08 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:26:51.780 15:37:08 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:51.780 15:37:08 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:51.780 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.780 15:37:08 -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:26:51.780 15:37:08 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:26:51.780 15:37:08 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:26:51.780 15:37:08 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:26:51.780 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.040 15:37:09 -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:26:52.040 15:37:09 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:26:52.040 15:37:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:52.040 15:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:52.040 15:37:09 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:26:52.040 15:37:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:52.040 15:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:52.040 15:37:09 -- target/identify_passthru.sh@31 -- # nvmfpid=1799286 
00:26:52.040 15:37:09 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:52.040 15:37:09 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:52.040 15:37:09 -- target/identify_passthru.sh@35 -- # waitforlisten 1799286 00:26:52.040 15:37:09 -- common/autotest_common.sh@817 -- # '[' -z 1799286 ']' 00:26:52.040 15:37:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.040 15:37:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:52.040 15:37:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:52.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:52.040 15:37:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:52.040 15:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:52.040 [2024-04-26 15:37:09.411074] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:26:52.040 [2024-04-26 15:37:09.411127] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:52.040 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.040 [2024-04-26 15:37:09.476805] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:52.301 [2024-04-26 15:37:09.541242] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:52.301 [2024-04-26 15:37:09.541280] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:52.301 [2024-04-26 15:37:09.541289] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:52.301 [2024-04-26 15:37:09.541297] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:52.301 [2024-04-26 15:37:09.541304] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:52.301 [2024-04-26 15:37:09.541452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.301 [2024-04-26 15:37:09.541568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:52.301 [2024-04-26 15:37:09.541723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.301 [2024-04-26 15:37:09.541724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:52.872 15:37:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:52.872 15:37:10 -- common/autotest_common.sh@850 -- # return 0 00:26:52.872 15:37:10 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:26:52.872 15:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.872 15:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:52.872 INFO: Log level set to 20 00:26:52.872 INFO: Requests: 00:26:52.872 { 00:26:52.872 "jsonrpc": "2.0", 00:26:52.872 "method": "nvmf_set_config", 00:26:52.872 "id": 1, 00:26:52.872 "params": { 00:26:52.872 "admin_cmd_passthru": { 00:26:52.872 "identify_ctrlr": true 00:26:52.872 } 00:26:52.872 } 00:26:52.872 } 00:26:52.872 00:26:52.872 INFO: response: 00:26:52.872 { 00:26:52.872 "jsonrpc": "2.0", 00:26:52.872 "id": 1, 00:26:52.872 "result": true 00:26:52.872 } 00:26:52.872 00:26:52.872 15:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.872 15:37:10 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:26:52.872 15:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.872 15:37:10 -- 
common/autotest_common.sh@10 -- # set +x 00:26:52.872 INFO: Setting log level to 20 00:26:52.872 INFO: Setting log level to 20 00:26:52.872 INFO: Log level set to 20 00:26:52.872 INFO: Log level set to 20 00:26:52.872 INFO: Requests: 00:26:52.872 { 00:26:52.872 "jsonrpc": "2.0", 00:26:52.872 "method": "framework_start_init", 00:26:52.872 "id": 1 00:26:52.872 } 00:26:52.872 00:26:52.872 INFO: Requests: 00:26:52.872 { 00:26:52.872 "jsonrpc": "2.0", 00:26:52.872 "method": "framework_start_init", 00:26:52.872 "id": 1 00:26:52.872 } 00:26:52.872 00:26:52.872 [2024-04-26 15:37:10.245279] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:26:52.872 INFO: response: 00:26:52.872 { 00:26:52.872 "jsonrpc": "2.0", 00:26:52.872 "id": 1, 00:26:52.872 "result": true 00:26:52.872 } 00:26:52.872 00:26:52.872 INFO: response: 00:26:52.872 { 00:26:52.872 "jsonrpc": "2.0", 00:26:52.872 "id": 1, 00:26:52.872 "result": true 00:26:52.872 } 00:26:52.872 00:26:52.872 15:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.872 15:37:10 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:52.872 15:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.872 15:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:52.872 INFO: Setting log level to 40 00:26:52.872 INFO: Setting log level to 40 00:26:52.872 INFO: Setting log level to 40 00:26:52.872 [2024-04-26 15:37:10.258531] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:52.872 15:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.872 15:37:10 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:26:52.872 15:37:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:52.872 15:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:52.872 15:37:10 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:26:52.872 15:37:10 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.872 15:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:53.442 Nvme0n1 00:26:53.442 15:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.442 15:37:10 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:26:53.442 15:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.442 15:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:53.442 15:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.442 15:37:10 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:53.442 15:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.443 15:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:53.443 15:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.443 15:37:10 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:53.443 15:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.443 15:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:53.443 [2024-04-26 15:37:10.644081] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:53.443 15:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.443 15:37:10 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:26:53.443 15:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.443 15:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:53.443 [2024-04-26 15:37:10.651861] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:53.443 [ 00:26:53.443 { 00:26:53.443 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:53.443 "subtype": "Discovery", 00:26:53.443 
"listen_addresses": [], 00:26:53.443 "allow_any_host": true, 00:26:53.443 "hosts": [] 00:26:53.443 }, 00:26:53.443 { 00:26:53.443 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:53.443 "subtype": "NVMe", 00:26:53.443 "listen_addresses": [ 00:26:53.443 { 00:26:53.443 "transport": "TCP", 00:26:53.443 "trtype": "TCP", 00:26:53.443 "adrfam": "IPv4", 00:26:53.443 "traddr": "10.0.0.2", 00:26:53.443 "trsvcid": "4420" 00:26:53.443 } 00:26:53.443 ], 00:26:53.443 "allow_any_host": true, 00:26:53.443 "hosts": [], 00:26:53.443 "serial_number": "SPDK00000000000001", 00:26:53.443 "model_number": "SPDK bdev Controller", 00:26:53.443 "max_namespaces": 1, 00:26:53.443 "min_cntlid": 1, 00:26:53.443 "max_cntlid": 65519, 00:26:53.443 "namespaces": [ 00:26:53.443 { 00:26:53.443 "nsid": 1, 00:26:53.443 "bdev_name": "Nvme0n1", 00:26:53.443 "name": "Nvme0n1", 00:26:53.443 "nguid": "3634473052605494002538450000001F", 00:26:53.443 "uuid": "36344730-5260-5494-0025-38450000001f" 00:26:53.443 } 00:26:53.443 ] 00:26:53.443 } 00:26:53.443 ] 00:26:53.443 15:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.443 15:37:10 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:53.443 15:37:10 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:26:53.443 15:37:10 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:26:53.443 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.443 15:37:10 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:26:53.443 15:37:10 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:53.443 15:37:10 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:26:53.443 15:37:10 -- 
target/identify_passthru.sh@61 -- # awk '{print $3}' 00:26:53.443 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.704 15:37:10 -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:26:53.704 15:37:10 -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:26:53.704 15:37:10 -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:26:53.704 15:37:10 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:53.704 15:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.704 15:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:53.704 15:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.704 15:37:10 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:26:53.704 15:37:10 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:26:53.704 15:37:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:53.704 15:37:10 -- nvmf/common.sh@117 -- # sync 00:26:53.704 15:37:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:53.704 15:37:10 -- nvmf/common.sh@120 -- # set +e 00:26:53.704 15:37:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:53.704 15:37:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:53.704 rmmod nvme_tcp 00:26:53.704 rmmod nvme_fabrics 00:26:53.704 rmmod nvme_keyring 00:26:53.704 15:37:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:53.704 15:37:10 -- nvmf/common.sh@124 -- # set -e 00:26:53.704 15:37:10 -- nvmf/common.sh@125 -- # return 0 00:26:53.704 15:37:10 -- nvmf/common.sh@478 -- # '[' -n 1799286 ']' 00:26:53.704 15:37:10 -- nvmf/common.sh@479 -- # killprocess 1799286 00:26:53.704 15:37:10 -- common/autotest_common.sh@936 -- # '[' -z 1799286 ']' 00:26:53.704 15:37:10 -- common/autotest_common.sh@940 -- # kill -0 1799286 00:26:53.704 15:37:10 -- common/autotest_common.sh@941 -- # uname 00:26:53.704 15:37:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
00:26:53.704 15:37:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1799286 00:26:53.704 15:37:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:53.704 15:37:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:53.704 15:37:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1799286' 00:26:53.704 killing process with pid 1799286 00:26:53.704 15:37:11 -- common/autotest_common.sh@955 -- # kill 1799286 00:26:53.704 [2024-04-26 15:37:11.032598] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:53.704 15:37:11 -- common/autotest_common.sh@960 -- # wait 1799286 00:26:53.964 15:37:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:53.965 15:37:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:53.965 15:37:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:53.965 15:37:11 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:53.965 15:37:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:53.965 15:37:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.965 15:37:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:53.965 15:37:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.513 15:37:13 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:56.513 00:26:56.513 real 0m12.479s 00:26:56.513 user 0m9.342s 00:26:56.513 sys 0m6.041s 00:26:56.513 15:37:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:56.513 15:37:13 -- common/autotest_common.sh@10 -- # set +x 00:26:56.513 ************************************ 00:26:56.513 END TEST nvmf_identify_passthru 00:26:56.513 ************************************ 00:26:56.513 15:37:13 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 
00:26:56.513 15:37:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:56.513 15:37:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:56.513 15:37:13 -- common/autotest_common.sh@10 -- # set +x 00:26:56.513 ************************************ 00:26:56.513 START TEST nvmf_dif 00:26:56.513 ************************************ 00:26:56.513 15:37:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:56.513 * Looking for test storage... 00:26:56.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:56.513 15:37:13 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:56.513 15:37:13 -- nvmf/common.sh@7 -- # uname -s 00:26:56.513 15:37:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:56.513 15:37:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:56.513 15:37:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:56.513 15:37:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:56.513 15:37:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:56.513 15:37:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:56.513 15:37:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:56.513 15:37:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:56.513 15:37:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:56.513 15:37:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:56.513 15:37:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:56.513 15:37:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:56.513 15:37:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:56.513 15:37:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:56.513 15:37:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:26:56.513 15:37:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:56.513 15:37:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:56.513 15:37:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.513 15:37:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.513 15:37:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:56.513 15:37:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.513 15:37:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.514 15:37:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.514 15:37:13 -- paths/export.sh@5 -- # export PATH 00:26:56.514 15:37:13 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.514 15:37:13 -- nvmf/common.sh@47 -- # : 0 00:26:56.514 15:37:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:56.514 15:37:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:56.514 15:37:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:56.514 15:37:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:56.514 15:37:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:56.514 15:37:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:56.514 15:37:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:56.514 15:37:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:56.514 15:37:13 -- target/dif.sh@15 -- # NULL_META=16 00:26:56.514 15:37:13 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:56.514 15:37:13 -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:56.514 15:37:13 -- target/dif.sh@15 -- # NULL_DIF=1 00:26:56.514 15:37:13 -- target/dif.sh@135 -- # nvmftestinit 00:26:56.514 15:37:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:56.514 15:37:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:56.514 15:37:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:56.514 15:37:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:56.514 15:37:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:56.514 15:37:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.514 15:37:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:56.514 15:37:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.514 15:37:13 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 
00:26:56.514 15:37:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:56.514 15:37:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:56.514 15:37:13 -- common/autotest_common.sh@10 -- # set +x 00:27:03.105 15:37:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:03.105 15:37:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:03.105 15:37:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:03.105 15:37:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:03.105 15:37:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:03.105 15:37:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:03.105 15:37:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:03.105 15:37:20 -- nvmf/common.sh@295 -- # net_devs=() 00:27:03.105 15:37:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:03.105 15:37:20 -- nvmf/common.sh@296 -- # e810=() 00:27:03.105 15:37:20 -- nvmf/common.sh@296 -- # local -ga e810 00:27:03.105 15:37:20 -- nvmf/common.sh@297 -- # x722=() 00:27:03.105 15:37:20 -- nvmf/common.sh@297 -- # local -ga x722 00:27:03.105 15:37:20 -- nvmf/common.sh@298 -- # mlx=() 00:27:03.105 15:37:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:03.105 15:37:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:03.105 15:37:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:03.105 15:37:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:03.105 15:37:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:03.105 15:37:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:03.105 15:37:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:03.105 15:37:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:03.105 15:37:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:03.105 15:37:20 -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:03.105 15:37:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:03.105 15:37:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:03.105 15:37:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:03.105 15:37:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:03.105 15:37:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:03.105 15:37:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:03.105 15:37:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:03.105 15:37:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:03.105 15:37:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:03.105 15:37:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:03.105 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:03.105 15:37:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:03.105 15:37:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:03.105 15:37:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.105 15:37:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.105 15:37:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:03.105 15:37:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:03.105 15:37:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:03.105 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:03.105 15:37:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:03.105 15:37:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:03.105 15:37:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.105 15:37:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.105 15:37:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:03.105 15:37:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:03.105 15:37:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:03.105 15:37:20 -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:27:03.105 15:37:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:03.105 15:37:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.105 15:37:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:03.105 15:37:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.105 15:37:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:03.105 Found net devices under 0000:31:00.0: cvl_0_0 00:27:03.105 15:37:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.105 15:37:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:03.105 15:37:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.105 15:37:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:03.105 15:37:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.105 15:37:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:03.105 Found net devices under 0000:31:00.1: cvl_0_1 00:27:03.105 15:37:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.105 15:37:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:03.105 15:37:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:03.105 15:37:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:03.105 15:37:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:03.105 15:37:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:03.105 15:37:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:03.105 15:37:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:03.105 15:37:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:03.105 15:37:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:03.105 15:37:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:03.105 15:37:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:03.105 15:37:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:27:03.105 15:37:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:03.105 15:37:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:03.105 15:37:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:03.366 15:37:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:03.367 15:37:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:03.367 15:37:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:03.367 15:37:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:03.367 15:37:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:03.367 15:37:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:03.367 15:37:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:03.628 15:37:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:03.628 15:37:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:03.628 15:37:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:03.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:03.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.513 ms 00:27:03.628 00:27:03.628 --- 10.0.0.2 ping statistics --- 00:27:03.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.628 rtt min/avg/max/mdev = 0.513/0.513/0.513/0.000 ms 00:27:03.628 15:37:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:03.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:03.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:27:03.628 00:27:03.628 --- 10.0.0.1 ping statistics --- 00:27:03.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.628 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:27:03.628 15:37:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:03.628 15:37:20 -- nvmf/common.sh@411 -- # return 0 00:27:03.628 15:37:20 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:27:03.629 15:37:20 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:07.061 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:07.061 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:07.061 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:07.061 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:07.061 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:07.061 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:07.061 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:07.061 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:07.061 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:07.061 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:27:07.061 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:07.061 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:07.061 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:07.061 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:07.061 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:07.061 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:07.061 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:07.322 15:37:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:07.322 15:37:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 
00:27:07.322 15:37:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:07.322 15:37:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:07.322 15:37:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:07.322 15:37:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:07.322 15:37:24 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:07.322 15:37:24 -- target/dif.sh@137 -- # nvmfappstart 00:27:07.322 15:37:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:07.322 15:37:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:07.322 15:37:24 -- common/autotest_common.sh@10 -- # set +x 00:27:07.322 15:37:24 -- nvmf/common.sh@470 -- # nvmfpid=1805554 00:27:07.322 15:37:24 -- nvmf/common.sh@471 -- # waitforlisten 1805554 00:27:07.322 15:37:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:07.322 15:37:24 -- common/autotest_common.sh@817 -- # '[' -z 1805554 ']' 00:27:07.322 15:37:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.322 15:37:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:07.322 15:37:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:07.322 15:37:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:07.322 15:37:24 -- common/autotest_common.sh@10 -- # set +x 00:27:07.322 [2024-04-26 15:37:24.607985] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:27:07.322 [2024-04-26 15:37:24.608034] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.322 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.322 [2024-04-26 15:37:24.673882] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.322 [2024-04-26 15:37:24.736879] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:07.322 [2024-04-26 15:37:24.736914] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:07.322 [2024-04-26 15:37:24.736922] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:07.322 [2024-04-26 15:37:24.736928] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:07.322 [2024-04-26 15:37:24.736933] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:07.322 [2024-04-26 15:37:24.736957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.267 15:37:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:08.267 15:37:25 -- common/autotest_common.sh@850 -- # return 0 00:27:08.267 15:37:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:08.267 15:37:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:08.267 15:37:25 -- common/autotest_common.sh@10 -- # set +x 00:27:08.267 15:37:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:08.267 15:37:25 -- target/dif.sh@139 -- # create_transport 00:27:08.267 15:37:25 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:08.267 15:37:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:08.267 15:37:25 -- common/autotest_common.sh@10 -- # set +x 00:27:08.267 [2024-04-26 15:37:25.407703] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.267 15:37:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:08.267 15:37:25 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:08.267 15:37:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:08.267 15:37:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:08.267 15:37:25 -- common/autotest_common.sh@10 -- # set +x 00:27:08.267 ************************************ 00:27:08.267 START TEST fio_dif_1_default 00:27:08.267 ************************************ 00:27:08.267 15:37:25 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:27:08.267 15:37:25 -- target/dif.sh@86 -- # create_subsystems 0 00:27:08.267 15:37:25 -- target/dif.sh@28 -- # local sub 00:27:08.267 15:37:25 -- target/dif.sh@30 -- # for sub in "$@" 00:27:08.267 15:37:25 -- target/dif.sh@31 -- # create_subsystem 0 00:27:08.267 15:37:25 -- target/dif.sh@18 -- # local sub_id=0 00:27:08.267 15:37:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:08.267 15:37:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:08.267 15:37:25 -- common/autotest_common.sh@10 -- # set +x 00:27:08.267 bdev_null0 00:27:08.267 15:37:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:08.267 15:37:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:08.267 15:37:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:08.267 15:37:25 -- common/autotest_common.sh@10 -- # set +x 00:27:08.267 15:37:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:08.267 15:37:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:08.267 15:37:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:08.267 15:37:25 -- common/autotest_common.sh@10 -- # set +x 00:27:08.267 15:37:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:08.267 15:37:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:08.267 15:37:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:08.267 15:37:25 -- common/autotest_common.sh@10 -- # set +x 00:27:08.267 [2024-04-26 15:37:25.604332] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.267 15:37:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:08.267 15:37:25 -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:08.267 15:37:25 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:08.267 15:37:25 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:08.267 15:37:25 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:08.267 15:37:25 -- nvmf/common.sh@521 -- # config=() 00:27:08.267 15:37:25 -- nvmf/common.sh@521 -- # local subsystem config 00:27:08.267 15:37:25 -- common/autotest_common.sh@1342 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:08.267 15:37:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:08.267 15:37:25 -- target/dif.sh@82 -- # gen_fio_conf 00:27:08.267 15:37:25 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:08.267 15:37:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:08.267 { 00:27:08.267 "params": { 00:27:08.267 "name": "Nvme$subsystem", 00:27:08.267 "trtype": "$TEST_TRANSPORT", 00:27:08.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.268 "adrfam": "ipv4", 00:27:08.268 "trsvcid": "$NVMF_PORT", 00:27:08.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.268 "hdgst": ${hdgst:-false}, 00:27:08.268 "ddgst": ${ddgst:-false} 00:27:08.268 }, 00:27:08.268 "method": "bdev_nvme_attach_controller" 00:27:08.268 } 00:27:08.268 EOF 00:27:08.268 )") 00:27:08.268 15:37:25 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:08.268 15:37:25 -- target/dif.sh@54 -- # local file 00:27:08.268 15:37:25 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:08.268 15:37:25 -- target/dif.sh@56 -- # cat 00:27:08.268 15:37:25 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:08.268 15:37:25 -- common/autotest_common.sh@1327 -- # shift 00:27:08.268 15:37:25 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:08.268 15:37:25 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:08.268 15:37:25 -- nvmf/common.sh@543 -- # cat 00:27:08.268 15:37:25 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:08.268 15:37:25 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:08.268 15:37:25 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:08.268 
15:37:25 -- target/dif.sh@72 -- # (( file <= files )) 00:27:08.268 15:37:25 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:08.268 15:37:25 -- nvmf/common.sh@545 -- # jq . 00:27:08.268 15:37:25 -- nvmf/common.sh@546 -- # IFS=, 00:27:08.268 15:37:25 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:08.268 "params": { 00:27:08.268 "name": "Nvme0", 00:27:08.268 "trtype": "tcp", 00:27:08.268 "traddr": "10.0.0.2", 00:27:08.268 "adrfam": "ipv4", 00:27:08.268 "trsvcid": "4420", 00:27:08.268 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:08.268 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:08.268 "hdgst": false, 00:27:08.268 "ddgst": false 00:27:08.268 }, 00:27:08.268 "method": "bdev_nvme_attach_controller" 00:27:08.268 }' 00:27:08.268 15:37:25 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:08.268 15:37:25 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:08.268 15:37:25 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:08.268 15:37:25 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:08.268 15:37:25 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:08.268 15:37:25 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:08.268 15:37:25 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:08.268 15:37:25 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:08.268 15:37:25 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:08.268 15:37:25 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:08.839 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:08.839 fio-3.35 00:27:08.839 Starting 1 thread 00:27:08.839 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.078 00:27:21.078 filename0: (groupid=0, 
jobs=1): err= 0: pid=1806091: Fri Apr 26 15:37:36 2024 00:27:21.078 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10038msec) 00:27:21.078 slat (nsec): min=5320, max=39105, avg=6105.48, stdev=1822.49 00:27:21.078 clat (usec): min=40880, max=43028, avg=41808.25, stdev=440.25 00:27:21.078 lat (usec): min=40885, max=43034, avg=41814.35, stdev=440.31 00:27:21.078 clat percentiles (usec): 00:27:21.078 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:27:21.078 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:27:21.078 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:21.078 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:27:21.078 | 99.99th=[43254] 00:27:21.078 bw ( KiB/s): min= 351, max= 384, per=99.86%, avg=382.35, stdev= 7.38, samples=20 00:27:21.078 iops : min= 87, max= 96, avg=95.55, stdev= 2.01, samples=20 00:27:21.078 lat (msec) : 50=100.00% 00:27:21.078 cpu : usr=95.13%, sys=4.68%, ctx=14, majf=0, minf=241 00:27:21.078 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:21.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:21.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:21.078 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:21.078 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:21.078 00:27:21.078 Run status group 0 (all jobs): 00:27:21.078 READ: bw=383KiB/s (392kB/s), 383KiB/s-383KiB/s (392kB/s-392kB/s), io=3840KiB (3932kB), run=10038-10038msec 00:27:21.078 15:37:36 -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:21.078 15:37:36 -- target/dif.sh@43 -- # local sub 00:27:21.078 15:37:36 -- target/dif.sh@45 -- # for sub in "$@" 00:27:21.078 15:37:36 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:21.078 15:37:36 -- target/dif.sh@36 -- # local sub_id=0 00:27:21.078 15:37:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:27:21.078 15:37:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.078 15:37:36 -- common/autotest_common.sh@10 -- # set +x 00:27:21.078 15:37:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.078 15:37:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:21.078 15:37:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.078 15:37:36 -- common/autotest_common.sh@10 -- # set +x 00:27:21.078 15:37:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.078 00:27:21.078 real 0m11.143s 00:27:21.078 user 0m22.784s 00:27:21.078 sys 0m0.789s 00:27:21.078 15:37:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:21.078 15:37:36 -- common/autotest_common.sh@10 -- # set +x 00:27:21.078 ************************************ 00:27:21.078 END TEST fio_dif_1_default 00:27:21.078 ************************************ 00:27:21.078 15:37:36 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:21.078 15:37:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:21.078 15:37:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:21.078 15:37:36 -- common/autotest_common.sh@10 -- # set +x 00:27:21.078 ************************************ 00:27:21.078 START TEST fio_dif_1_multi_subsystems 00:27:21.078 ************************************ 00:27:21.078 15:37:36 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:27:21.078 15:37:36 -- target/dif.sh@92 -- # local files=1 00:27:21.078 15:37:36 -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:21.078 15:37:36 -- target/dif.sh@28 -- # local sub 00:27:21.078 15:37:36 -- target/dif.sh@30 -- # for sub in "$@" 00:27:21.078 15:37:36 -- target/dif.sh@31 -- # create_subsystem 0 00:27:21.078 15:37:36 -- target/dif.sh@18 -- # local sub_id=0 00:27:21.078 15:37:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:21.078 15:37:36 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.078 15:37:36 -- common/autotest_common.sh@10 -- # set +x 00:27:21.078 bdev_null0 00:27:21.078 15:37:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.078 15:37:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:21.078 15:37:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.078 15:37:36 -- common/autotest_common.sh@10 -- # set +x 00:27:21.078 15:37:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.078 15:37:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:21.078 15:37:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.078 15:37:36 -- common/autotest_common.sh@10 -- # set +x 00:27:21.078 15:37:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.079 15:37:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:21.079 15:37:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.079 15:37:36 -- common/autotest_common.sh@10 -- # set +x 00:27:21.079 [2024-04-26 15:37:36.925701] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:21.079 15:37:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.079 15:37:36 -- target/dif.sh@30 -- # for sub in "$@" 00:27:21.079 15:37:36 -- target/dif.sh@31 -- # create_subsystem 1 00:27:21.079 15:37:36 -- target/dif.sh@18 -- # local sub_id=1 00:27:21.079 15:37:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:21.079 15:37:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.079 15:37:36 -- common/autotest_common.sh@10 -- # set +x 00:27:21.079 bdev_null1 00:27:21.079 15:37:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.079 15:37:36 -- target/dif.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:21.079 15:37:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.079 15:37:36 -- common/autotest_common.sh@10 -- # set +x 00:27:21.079 15:37:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.079 15:37:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:21.079 15:37:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.079 15:37:36 -- common/autotest_common.sh@10 -- # set +x 00:27:21.079 15:37:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.079 15:37:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:21.079 15:37:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:21.079 15:37:36 -- common/autotest_common.sh@10 -- # set +x 00:27:21.079 15:37:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.079 15:37:36 -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:21.079 15:37:36 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:21.079 15:37:36 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:21.079 15:37:36 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:21.079 15:37:36 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:21.079 15:37:36 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:21.079 15:37:36 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:21.079 15:37:36 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:21.079 15:37:36 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:21.079 15:37:36 -- common/autotest_common.sh@1327 -- # shift 
00:27:21.079 15:37:36 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:21.079 15:37:36 -- nvmf/common.sh@521 -- # config=() 00:27:21.079 15:37:36 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:21.079 15:37:36 -- target/dif.sh@82 -- # gen_fio_conf 00:27:21.079 15:37:36 -- nvmf/common.sh@521 -- # local subsystem config 00:27:21.079 15:37:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:21.079 15:37:36 -- target/dif.sh@54 -- # local file 00:27:21.079 15:37:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:21.079 { 00:27:21.079 "params": { 00:27:21.079 "name": "Nvme$subsystem", 00:27:21.079 "trtype": "$TEST_TRANSPORT", 00:27:21.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.079 "adrfam": "ipv4", 00:27:21.079 "trsvcid": "$NVMF_PORT", 00:27:21.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.079 "hdgst": ${hdgst:-false}, 00:27:21.079 "ddgst": ${ddgst:-false} 00:27:21.079 }, 00:27:21.079 "method": "bdev_nvme_attach_controller" 00:27:21.079 } 00:27:21.079 EOF 00:27:21.079 )") 00:27:21.079 15:37:36 -- target/dif.sh@56 -- # cat 00:27:21.079 15:37:36 -- nvmf/common.sh@543 -- # cat 00:27:21.079 15:37:36 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:21.079 15:37:36 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:21.079 15:37:36 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:21.079 15:37:36 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:21.079 15:37:36 -- target/dif.sh@72 -- # (( file <= files )) 00:27:21.079 15:37:36 -- target/dif.sh@73 -- # cat 00:27:21.079 15:37:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:21.079 15:37:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:21.079 { 00:27:21.079 "params": { 00:27:21.079 "name": "Nvme$subsystem", 00:27:21.079 "trtype": "$TEST_TRANSPORT", 00:27:21.079 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:27:21.079 "adrfam": "ipv4", 00:27:21.079 "trsvcid": "$NVMF_PORT", 00:27:21.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.079 "hdgst": ${hdgst:-false}, 00:27:21.079 "ddgst": ${ddgst:-false} 00:27:21.079 }, 00:27:21.079 "method": "bdev_nvme_attach_controller" 00:27:21.079 } 00:27:21.079 EOF 00:27:21.079 )") 00:27:21.079 15:37:36 -- target/dif.sh@72 -- # (( file++ )) 00:27:21.079 15:37:36 -- target/dif.sh@72 -- # (( file <= files )) 00:27:21.079 15:37:36 -- nvmf/common.sh@543 -- # cat 00:27:21.079 15:37:36 -- nvmf/common.sh@545 -- # jq . 00:27:21.079 15:37:36 -- nvmf/common.sh@546 -- # IFS=, 00:27:21.079 15:37:36 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:21.079 "params": { 00:27:21.079 "name": "Nvme0", 00:27:21.079 "trtype": "tcp", 00:27:21.079 "traddr": "10.0.0.2", 00:27:21.079 "adrfam": "ipv4", 00:27:21.079 "trsvcid": "4420", 00:27:21.079 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:21.079 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:21.079 "hdgst": false, 00:27:21.079 "ddgst": false 00:27:21.079 }, 00:27:21.079 "method": "bdev_nvme_attach_controller" 00:27:21.079 },{ 00:27:21.079 "params": { 00:27:21.079 "name": "Nvme1", 00:27:21.079 "trtype": "tcp", 00:27:21.079 "traddr": "10.0.0.2", 00:27:21.079 "adrfam": "ipv4", 00:27:21.079 "trsvcid": "4420", 00:27:21.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:21.079 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:21.079 "hdgst": false, 00:27:21.079 "ddgst": false 00:27:21.079 }, 00:27:21.079 "method": "bdev_nvme_attach_controller" 00:27:21.079 }' 00:27:21.079 15:37:37 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:21.079 15:37:37 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:21.079 15:37:37 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:21.079 15:37:37 -- common/autotest_common.sh@1331 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:21.079 15:37:37 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:21.079 15:37:37 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:21.079 15:37:37 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:21.079 15:37:37 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:21.079 15:37:37 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:21.079 15:37:37 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:21.079 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:21.079 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:21.079 fio-3.35 00:27:21.079 Starting 2 threads 00:27:21.079 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.085 00:27:31.085 filename0: (groupid=0, jobs=1): err= 0: pid=1808358: Fri Apr 26 15:37:48 2024 00:27:31.085 read: IOPS=173, BW=695KiB/s (712kB/s)(6976KiB/10039msec) 00:27:31.085 slat (nsec): min=5331, max=32326, avg=6174.44, stdev=1353.44 00:27:31.085 clat (usec): min=636, max=42963, avg=23008.00, stdev=20086.81 00:27:31.085 lat (usec): min=641, max=42996, avg=23014.17, stdev=20086.79 00:27:31.085 clat percentiles (usec): 00:27:31.085 | 1.00th=[ 701], 5.00th=[ 816], 10.00th=[ 865], 20.00th=[ 889], 00:27:31.085 | 30.00th=[ 906], 40.00th=[ 955], 50.00th=[41157], 60.00th=[41157], 00:27:31.085 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:27:31.085 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:27:31.085 | 99.99th=[42730] 00:27:31.085 bw ( KiB/s): min= 640, max= 768, per=47.83%, avg=696.00, stdev=42.65, samples=20 00:27:31.085 iops : min= 160, max= 192, avg=174.00, stdev=10.66, samples=20 
00:27:31.085 lat (usec) : 750=2.98%, 1000=39.74% 00:27:31.085 lat (msec) : 2=2.47%, 50=54.82% 00:27:31.085 cpu : usr=96.74%, sys=3.02%, ctx=37, majf=0, minf=32 00:27:31.085 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:31.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.085 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.085 issued rwts: total=1744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.085 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:31.085 filename1: (groupid=0, jobs=1): err= 0: pid=1808359: Fri Apr 26 15:37:48 2024 00:27:31.085 read: IOPS=190, BW=760KiB/s (779kB/s)(7632KiB/10038msec) 00:27:31.085 slat (nsec): min=5327, max=32820, avg=6133.84, stdev=1324.39 00:27:31.085 clat (usec): min=539, max=41688, avg=21026.65, stdev=20305.46 00:27:31.085 lat (usec): min=547, max=41721, avg=21032.78, stdev=20305.46 00:27:31.085 clat percentiles (usec): 00:27:31.085 | 1.00th=[ 635], 5.00th=[ 652], 10.00th=[ 668], 20.00th=[ 676], 00:27:31.085 | 30.00th=[ 685], 40.00th=[ 693], 50.00th=[41157], 60.00th=[41157], 00:27:31.085 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:31.085 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:27:31.085 | 99.99th=[41681] 00:27:31.085 bw ( KiB/s): min= 704, max= 768, per=52.30%, avg=761.60, stdev=19.70, samples=20 00:27:31.085 iops : min= 176, max= 192, avg=190.40, stdev= 4.92, samples=20 00:27:31.085 lat (usec) : 750=48.64%, 1000=1.26% 00:27:31.085 lat (msec) : 50=50.10% 00:27:31.085 cpu : usr=97.38%, sys=2.42%, ctx=10, majf=0, minf=166 00:27:31.086 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:31.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.086 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:27:31.086 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:31.086 00:27:31.086 Run status group 0 (all jobs): 00:27:31.086 READ: bw=1455KiB/s (1490kB/s), 695KiB/s-760KiB/s (712kB/s-779kB/s), io=14.3MiB (15.0MB), run=10038-10039msec 00:27:31.086 15:37:48 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:31.086 15:37:48 -- target/dif.sh@43 -- # local sub 00:27:31.086 15:37:48 -- target/dif.sh@45 -- # for sub in "$@" 00:27:31.086 15:37:48 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:31.086 15:37:48 -- target/dif.sh@36 -- # local sub_id=0 00:27:31.086 15:37:48 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:31.086 15:37:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.086 15:37:48 -- common/autotest_common.sh@10 -- # set +x 00:27:31.086 15:37:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.086 15:37:48 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:31.086 15:37:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.086 15:37:48 -- common/autotest_common.sh@10 -- # set +x 00:27:31.086 15:37:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.086 15:37:48 -- target/dif.sh@45 -- # for sub in "$@" 00:27:31.086 15:37:48 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:31.086 15:37:48 -- target/dif.sh@36 -- # local sub_id=1 00:27:31.086 15:37:48 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:31.086 15:37:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.086 15:37:48 -- common/autotest_common.sh@10 -- # set +x 00:27:31.086 15:37:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.086 15:37:48 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:31.086 15:37:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.086 15:37:48 -- common/autotest_common.sh@10 -- # set +x 00:27:31.086 15:37:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:27:31.086 00:27:31.086 real 0m11.331s 00:27:31.086 user 0m37.237s 00:27:31.086 sys 0m0.875s 00:27:31.086 15:37:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:31.086 15:37:48 -- common/autotest_common.sh@10 -- # set +x 00:27:31.086 ************************************ 00:27:31.086 END TEST fio_dif_1_multi_subsystems 00:27:31.086 ************************************ 00:27:31.086 15:37:48 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:31.086 15:37:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:31.086 15:37:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:31.086 15:37:48 -- common/autotest_common.sh@10 -- # set +x 00:27:31.086 ************************************ 00:27:31.086 START TEST fio_dif_rand_params 00:27:31.086 ************************************ 00:27:31.086 15:37:48 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:27:31.086 15:37:48 -- target/dif.sh@100 -- # local NULL_DIF 00:27:31.086 15:37:48 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:31.086 15:37:48 -- target/dif.sh@103 -- # NULL_DIF=3 00:27:31.086 15:37:48 -- target/dif.sh@103 -- # bs=128k 00:27:31.086 15:37:48 -- target/dif.sh@103 -- # numjobs=3 00:27:31.086 15:37:48 -- target/dif.sh@103 -- # iodepth=3 00:27:31.086 15:37:48 -- target/dif.sh@103 -- # runtime=5 00:27:31.086 15:37:48 -- target/dif.sh@105 -- # create_subsystems 0 00:27:31.086 15:37:48 -- target/dif.sh@28 -- # local sub 00:27:31.086 15:37:48 -- target/dif.sh@30 -- # for sub in "$@" 00:27:31.086 15:37:48 -- target/dif.sh@31 -- # create_subsystem 0 00:27:31.086 15:37:48 -- target/dif.sh@18 -- # local sub_id=0 00:27:31.086 15:37:48 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:31.086 15:37:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.086 15:37:48 -- common/autotest_common.sh@10 -- # set +x 00:27:31.086 bdev_null0 00:27:31.086 15:37:48 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.086 15:37:48 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:31.086 15:37:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.086 15:37:48 -- common/autotest_common.sh@10 -- # set +x 00:27:31.086 15:37:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.086 15:37:48 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:31.086 15:37:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.086 15:37:48 -- common/autotest_common.sh@10 -- # set +x 00:27:31.086 15:37:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.086 15:37:48 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:31.086 15:37:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.086 15:37:48 -- common/autotest_common.sh@10 -- # set +x 00:27:31.086 [2024-04-26 15:37:48.470602] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.086 15:37:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.086 15:37:48 -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:31.086 15:37:48 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:31.086 15:37:48 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:31.086 15:37:48 -- nvmf/common.sh@521 -- # config=() 00:27:31.086 15:37:48 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:31.086 15:37:48 -- nvmf/common.sh@521 -- # local subsystem config 00:27:31.086 15:37:48 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:31.086 15:37:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:31.086 15:37:48 -- nvmf/common.sh@543 
-- # config+=("$(cat <<-EOF 00:27:31.086 { 00:27:31.086 "params": { 00:27:31.086 "name": "Nvme$subsystem", 00:27:31.086 "trtype": "$TEST_TRANSPORT", 00:27:31.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.086 "adrfam": "ipv4", 00:27:31.086 "trsvcid": "$NVMF_PORT", 00:27:31.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.086 "hdgst": ${hdgst:-false}, 00:27:31.086 "ddgst": ${ddgst:-false} 00:27:31.086 }, 00:27:31.086 "method": "bdev_nvme_attach_controller" 00:27:31.086 } 00:27:31.086 EOF 00:27:31.086 )") 00:27:31.086 15:37:48 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:31.086 15:37:48 -- target/dif.sh@82 -- # gen_fio_conf 00:27:31.086 15:37:48 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:31.086 15:37:48 -- target/dif.sh@54 -- # local file 00:27:31.086 15:37:48 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:31.086 15:37:48 -- target/dif.sh@56 -- # cat 00:27:31.086 15:37:48 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:31.086 15:37:48 -- common/autotest_common.sh@1327 -- # shift 00:27:31.086 15:37:48 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:31.086 15:37:48 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:31.086 15:37:48 -- nvmf/common.sh@543 -- # cat 00:27:31.086 15:37:48 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:31.086 15:37:48 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:31.086 15:37:48 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:31.086 15:37:48 -- target/dif.sh@72 -- # (( file <= files )) 00:27:31.086 15:37:48 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:31.086 15:37:48 -- nvmf/common.sh@545 -- # jq . 
00:27:31.086 15:37:48 -- nvmf/common.sh@546 -- # IFS=, 00:27:31.086 15:37:48 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:31.086 "params": { 00:27:31.086 "name": "Nvme0", 00:27:31.086 "trtype": "tcp", 00:27:31.086 "traddr": "10.0.0.2", 00:27:31.086 "adrfam": "ipv4", 00:27:31.086 "trsvcid": "4420", 00:27:31.087 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:31.087 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:31.087 "hdgst": false, 00:27:31.087 "ddgst": false 00:27:31.087 }, 00:27:31.087 "method": "bdev_nvme_attach_controller" 00:27:31.087 }' 00:27:31.087 15:37:48 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:31.087 15:37:48 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:31.087 15:37:48 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:31.087 15:37:48 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:31.087 15:37:48 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:31.087 15:37:48 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:31.373 15:37:48 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:31.373 15:37:48 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:31.373 15:37:48 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:31.373 15:37:48 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:31.637 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:31.637 ... 
00:27:31.637 fio-3.35 00:27:31.637 Starting 3 threads 00:27:31.637 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.224 00:27:38.224 filename0: (groupid=0, jobs=1): err= 0: pid=1810820: Fri Apr 26 15:37:54 2024 00:27:38.224 read: IOPS=236, BW=29.5MiB/s (30.9MB/s)(149MiB/5044msec) 00:27:38.224 slat (nsec): min=5410, max=36313, avg=7617.04, stdev=1852.21 00:27:38.224 clat (usec): min=4945, max=52771, avg=12658.16, stdev=9346.22 00:27:38.224 lat (usec): min=4953, max=52780, avg=12665.77, stdev=9346.28 00:27:38.224 clat percentiles (usec): 00:27:38.224 | 1.00th=[ 5604], 5.00th=[ 6849], 10.00th=[ 7373], 20.00th=[ 8225], 00:27:38.224 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10552], 60.00th=[11338], 00:27:38.224 | 70.00th=[12125], 80.00th=[13042], 90.00th=[14353], 95.00th=[47449], 00:27:38.224 | 99.00th=[50070], 99.50th=[50594], 99.90th=[52167], 99.95th=[52691], 00:27:38.224 | 99.99th=[52691] 00:27:38.224 bw ( KiB/s): min=23552, max=37194, per=35.60%, avg=30452.00, stdev=3882.06, samples=10 00:27:38.224 iops : min= 184, max= 290, avg=237.80, stdev=30.21, samples=10 00:27:38.224 lat (msec) : 10=42.99%, 20=51.05%, 50=4.87%, 100=1.09% 00:27:38.224 cpu : usr=95.93%, sys=3.81%, ctx=16, majf=0, minf=38 00:27:38.224 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:38.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.224 issued rwts: total=1191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.224 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:38.224 filename0: (groupid=0, jobs=1): err= 0: pid=1810821: Fri Apr 26 15:37:54 2024 00:27:38.224 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(135MiB/5016msec) 00:27:38.224 slat (nsec): min=5417, max=49180, avg=7928.31, stdev=2408.73 00:27:38.224 clat (usec): min=5485, max=89732, avg=13972.03, stdev=10605.78 00:27:38.224 lat (usec): min=5491, max=89741, avg=13979.96, 
stdev=10605.97 00:27:38.224 clat percentiles (usec): 00:27:38.224 | 1.00th=[ 5932], 5.00th=[ 7111], 10.00th=[ 8029], 20.00th=[ 8979], 00:27:38.224 | 30.00th=[ 9896], 40.00th=[10552], 50.00th=[11338], 60.00th=[12125], 00:27:38.224 | 70.00th=[12911], 80.00th=[13829], 90.00th=[15401], 95.00th=[48497], 00:27:38.224 | 99.00th=[51643], 99.50th=[52691], 99.90th=[54264], 99.95th=[89654], 00:27:38.224 | 99.99th=[89654] 00:27:38.224 bw ( KiB/s): min=21760, max=33280, per=32.11%, avg=27468.80, stdev=3594.25, samples=10 00:27:38.224 iops : min= 170, max= 260, avg=214.60, stdev=28.08, samples=10 00:27:38.224 lat (msec) : 10=30.95%, 20=61.62%, 50=4.37%, 100=3.07% 00:27:38.224 cpu : usr=95.83%, sys=3.89%, ctx=11, majf=0, minf=111 00:27:38.224 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:38.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.224 issued rwts: total=1076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.224 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:38.224 filename0: (groupid=0, jobs=1): err= 0: pid=1810822: Fri Apr 26 15:37:54 2024 00:27:38.224 read: IOPS=220, BW=27.5MiB/s (28.9MB/s)(138MiB/5015msec) 00:27:38.224 slat (nsec): min=7794, max=59279, avg=8935.66, stdev=2727.12 00:27:38.224 clat (usec): min=5614, max=92474, avg=13614.43, stdev=9988.49 00:27:38.224 lat (usec): min=5623, max=92482, avg=13623.36, stdev=9988.42 00:27:38.224 clat percentiles (usec): 00:27:38.224 | 1.00th=[ 6259], 5.00th=[ 6915], 10.00th=[ 7832], 20.00th=[ 8979], 00:27:38.224 | 30.00th=[ 9896], 40.00th=[10683], 50.00th=[11469], 60.00th=[12387], 00:27:38.224 | 70.00th=[13435], 80.00th=[14353], 90.00th=[15664], 95.00th=[47973], 00:27:38.224 | 99.00th=[52167], 99.50th=[53216], 99.90th=[91751], 99.95th=[92799], 00:27:38.224 | 99.99th=[92799] 00:27:38.224 bw ( KiB/s): min=22272, max=32512, per=32.95%, avg=28185.60, stdev=3410.03, 
samples=10 00:27:38.224 iops : min= 174, max= 254, avg=220.20, stdev=26.64, samples=10 00:27:38.224 lat (msec) : 10=30.98%, 20=63.32%, 50=3.35%, 100=2.36% 00:27:38.224 cpu : usr=96.35%, sys=3.39%, ctx=13, majf=0, minf=154 00:27:38.224 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:38.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.224 issued rwts: total=1104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.224 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:38.224 00:27:38.224 Run status group 0 (all jobs): 00:27:38.224 READ: bw=83.5MiB/s (87.6MB/s), 26.8MiB/s-29.5MiB/s (28.1MB/s-30.9MB/s), io=421MiB (442MB), run=5015-5044msec 00:27:38.224 15:37:54 -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:38.224 15:37:54 -- target/dif.sh@43 -- # local sub 00:27:38.224 15:37:54 -- target/dif.sh@45 -- # for sub in "$@" 00:27:38.224 15:37:54 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:38.224 15:37:54 -- target/dif.sh@36 -- # local sub_id=0 00:27:38.224 15:37:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:38.224 15:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.224 15:37:54 -- common/autotest_common.sh@10 -- # set +x 00:27:38.224 15:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.224 15:37:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:38.224 15:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.224 15:37:54 -- common/autotest_common.sh@10 -- # set +x 00:27:38.224 15:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.224 15:37:54 -- target/dif.sh@109 -- # NULL_DIF=2 00:27:38.224 15:37:54 -- target/dif.sh@109 -- # bs=4k 00:27:38.224 15:37:54 -- target/dif.sh@109 -- # numjobs=8 00:27:38.224 15:37:54 -- target/dif.sh@109 -- # iodepth=16 00:27:38.224 15:37:54 -- 
target/dif.sh@109 -- # runtime= 00:27:38.224 15:37:54 -- target/dif.sh@109 -- # files=2 00:27:38.224 15:37:54 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:38.224 15:37:54 -- target/dif.sh@28 -- # local sub 00:27:38.224 15:37:54 -- target/dif.sh@30 -- # for sub in "$@" 00:27:38.224 15:37:54 -- target/dif.sh@31 -- # create_subsystem 0 00:27:38.224 15:37:54 -- target/dif.sh@18 -- # local sub_id=0 00:27:38.224 15:37:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:38.224 15:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.224 15:37:54 -- common/autotest_common.sh@10 -- # set +x 00:27:38.224 bdev_null0 00:27:38.224 15:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.224 15:37:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:38.224 15:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.224 15:37:54 -- common/autotest_common.sh@10 -- # set +x 00:27:38.224 15:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.224 15:37:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:38.224 15:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.224 15:37:54 -- common/autotest_common.sh@10 -- # set +x 00:27:38.224 15:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.224 15:37:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:38.224 15:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.224 15:37:54 -- common/autotest_common.sh@10 -- # set +x 00:27:38.224 [2024-04-26 15:37:54.632825] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:38.224 15:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.224 15:37:54 -- target/dif.sh@30 -- # for sub in 
"$@" 00:27:38.224 15:37:54 -- target/dif.sh@31 -- # create_subsystem 1 00:27:38.224 15:37:54 -- target/dif.sh@18 -- # local sub_id=1 00:27:38.224 15:37:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:38.224 15:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.224 15:37:54 -- common/autotest_common.sh@10 -- # set +x 00:27:38.224 bdev_null1 00:27:38.224 15:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.224 15:37:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:38.224 15:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.224 15:37:54 -- common/autotest_common.sh@10 -- # set +x 00:27:38.224 15:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.224 15:37:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:38.224 15:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.224 15:37:54 -- common/autotest_common.sh@10 -- # set +x 00:27:38.224 15:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.225 15:37:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:38.225 15:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.225 15:37:54 -- common/autotest_common.sh@10 -- # set +x 00:27:38.225 15:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.225 15:37:54 -- target/dif.sh@30 -- # for sub in "$@" 00:27:38.225 15:37:54 -- target/dif.sh@31 -- # create_subsystem 2 00:27:38.225 15:37:54 -- target/dif.sh@18 -- # local sub_id=2 00:27:38.225 15:37:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:38.225 15:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.225 15:37:54 -- common/autotest_common.sh@10 -- # set +x 00:27:38.225 
bdev_null2 00:27:38.225 15:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.225 15:37:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:38.225 15:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.225 15:37:54 -- common/autotest_common.sh@10 -- # set +x 00:27:38.225 15:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.225 15:37:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:38.225 15:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.225 15:37:54 -- common/autotest_common.sh@10 -- # set +x 00:27:38.225 15:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.225 15:37:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:38.225 15:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.225 15:37:54 -- common/autotest_common.sh@10 -- # set +x 00:27:38.225 15:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.225 15:37:54 -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:38.225 15:37:54 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:38.225 15:37:54 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:38.225 15:37:54 -- nvmf/common.sh@521 -- # config=() 00:27:38.225 15:37:54 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:38.225 15:37:54 -- nvmf/common.sh@521 -- # local subsystem config 00:27:38.225 15:37:54 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:38.225 15:37:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:38.225 15:37:54 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:38.225 15:37:54 -- nvmf/common.sh@543 -- # 
config+=("$(cat <<-EOF 00:27:38.225 { 00:27:38.225 "params": { 00:27:38.225 "name": "Nvme$subsystem", 00:27:38.225 "trtype": "$TEST_TRANSPORT", 00:27:38.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.225 "adrfam": "ipv4", 00:27:38.225 "trsvcid": "$NVMF_PORT", 00:27:38.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.225 "hdgst": ${hdgst:-false}, 00:27:38.225 "ddgst": ${ddgst:-false} 00:27:38.225 }, 00:27:38.225 "method": "bdev_nvme_attach_controller" 00:27:38.225 } 00:27:38.225 EOF 00:27:38.225 )") 00:27:38.225 15:37:54 -- target/dif.sh@82 -- # gen_fio_conf 00:27:38.225 15:37:54 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:38.225 15:37:54 -- target/dif.sh@54 -- # local file 00:27:38.225 15:37:54 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:38.225 15:37:54 -- target/dif.sh@56 -- # cat 00:27:38.225 15:37:54 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:38.225 15:37:54 -- common/autotest_common.sh@1327 -- # shift 00:27:38.225 15:37:54 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:38.225 15:37:54 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:38.225 15:37:54 -- nvmf/common.sh@543 -- # cat 00:27:38.225 15:37:54 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:38.225 15:37:54 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:38.225 15:37:54 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:38.225 15:37:54 -- target/dif.sh@72 -- # (( file <= files )) 00:27:38.225 15:37:54 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:38.225 15:37:54 -- target/dif.sh@73 -- # cat 00:27:38.225 15:37:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:38.225 15:37:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:38.225 { 
00:27:38.225 "params": { 00:27:38.225 "name": "Nvme$subsystem", 00:27:38.225 "trtype": "$TEST_TRANSPORT", 00:27:38.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.225 "adrfam": "ipv4", 00:27:38.225 "trsvcid": "$NVMF_PORT", 00:27:38.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.225 "hdgst": ${hdgst:-false}, 00:27:38.225 "ddgst": ${ddgst:-false} 00:27:38.225 }, 00:27:38.225 "method": "bdev_nvme_attach_controller" 00:27:38.225 } 00:27:38.225 EOF 00:27:38.225 )") 00:27:38.225 15:37:54 -- target/dif.sh@72 -- # (( file++ )) 00:27:38.225 15:37:54 -- target/dif.sh@72 -- # (( file <= files )) 00:27:38.225 15:37:54 -- target/dif.sh@73 -- # cat 00:27:38.225 15:37:54 -- nvmf/common.sh@543 -- # cat 00:27:38.225 15:37:54 -- target/dif.sh@72 -- # (( file++ )) 00:27:38.225 15:37:54 -- target/dif.sh@72 -- # (( file <= files )) 00:27:38.225 15:37:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:38.225 15:37:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:38.225 { 00:27:38.225 "params": { 00:27:38.225 "name": "Nvme$subsystem", 00:27:38.225 "trtype": "$TEST_TRANSPORT", 00:27:38.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.225 "adrfam": "ipv4", 00:27:38.225 "trsvcid": "$NVMF_PORT", 00:27:38.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.225 "hdgst": ${hdgst:-false}, 00:27:38.225 "ddgst": ${ddgst:-false} 00:27:38.225 }, 00:27:38.225 "method": "bdev_nvme_attach_controller" 00:27:38.225 } 00:27:38.225 EOF 00:27:38.225 )") 00:27:38.225 15:37:54 -- nvmf/common.sh@543 -- # cat 00:27:38.225 15:37:54 -- nvmf/common.sh@545 -- # jq . 
00:27:38.225 15:37:54 -- nvmf/common.sh@546 -- # IFS=, 00:27:38.225 15:37:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:38.225 "params": { 00:27:38.225 "name": "Nvme0", 00:27:38.225 "trtype": "tcp", 00:27:38.225 "traddr": "10.0.0.2", 00:27:38.225 "adrfam": "ipv4", 00:27:38.225 "trsvcid": "4420", 00:27:38.225 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:38.225 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:38.225 "hdgst": false, 00:27:38.225 "ddgst": false 00:27:38.225 }, 00:27:38.225 "method": "bdev_nvme_attach_controller" 00:27:38.225 },{ 00:27:38.225 "params": { 00:27:38.225 "name": "Nvme1", 00:27:38.225 "trtype": "tcp", 00:27:38.225 "traddr": "10.0.0.2", 00:27:38.225 "adrfam": "ipv4", 00:27:38.225 "trsvcid": "4420", 00:27:38.225 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:38.225 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:38.225 "hdgst": false, 00:27:38.225 "ddgst": false 00:27:38.225 }, 00:27:38.225 "method": "bdev_nvme_attach_controller" 00:27:38.225 },{ 00:27:38.225 "params": { 00:27:38.225 "name": "Nvme2", 00:27:38.225 "trtype": "tcp", 00:27:38.225 "traddr": "10.0.0.2", 00:27:38.225 "adrfam": "ipv4", 00:27:38.225 "trsvcid": "4420", 00:27:38.225 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:38.225 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:38.225 "hdgst": false, 00:27:38.225 "ddgst": false 00:27:38.225 }, 00:27:38.225 "method": "bdev_nvme_attach_controller" 00:27:38.225 }' 00:27:38.225 15:37:54 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:38.225 15:37:54 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:38.225 15:37:54 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:38.225 15:37:54 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:38.225 15:37:54 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:38.225 15:37:54 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:38.225 15:37:54 -- 
common/autotest_common.sh@1331 -- # asan_lib= 00:27:38.225 15:37:54 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:38.225 15:37:54 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:38.225 15:37:54 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:38.225 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:38.225 ... 00:27:38.225 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:38.225 ... 00:27:38.225 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:38.225 ... 00:27:38.225 fio-3.35 00:27:38.225 Starting 24 threads 00:27:38.225 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.467 00:27:50.467 filename0: (groupid=0, jobs=1): err= 0: pid=1812253: Fri Apr 26 15:38:06 2024 00:27:50.467 read: IOPS=564, BW=2257KiB/s (2311kB/s)(22.1MiB/10031msec) 00:27:50.467 slat (nsec): min=5506, max=57992, avg=7971.78, stdev=3246.11 00:27:50.467 clat (usec): min=1105, max=35346, avg=28298.00, stdev=6537.47 00:27:50.467 lat (usec): min=1129, max=35353, avg=28305.97, stdev=6536.38 00:27:50.467 clat percentiles (usec): 00:27:50.467 | 1.00th=[ 2900], 5.00th=[19268], 10.00th=[19530], 20.00th=[22938], 00:27:50.467 | 30.00th=[24511], 40.00th=[26346], 50.00th=[32637], 60.00th=[32637], 00:27:50.467 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:27:50.467 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:27:50.467 | 99.99th=[35390] 00:27:50.467 bw ( KiB/s): min= 1916, max= 3712, per=4.83%, avg=2257.15, stdev=493.98, samples=20 00:27:50.467 iops : min= 479, max= 928, avg=564.25, stdev=123.51, samples=20 00:27:50.467 lat (msec) : 2=0.53%, 4=0.74%, 10=0.92%, 20=11.82%, 
50=85.99% 00:27:50.467 cpu : usr=98.68%, sys=0.76%, ctx=63, majf=0, minf=30 00:27:50.467 IO depths : 1=4.1%, 2=8.2%, 4=18.5%, 8=60.8%, 16=8.5%, 32=0.0%, >=64=0.0% 00:27:50.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.467 complete : 0=0.0%, 4=92.3%, 8=2.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.467 issued rwts: total=5660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.467 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.467 filename0: (groupid=0, jobs=1): err= 0: pid=1812254: Fri Apr 26 15:38:06 2024 00:27:50.467 read: IOPS=524, BW=2096KiB/s (2147kB/s)(20.5MiB/10014msec) 00:27:50.467 slat (nsec): min=5502, max=77964, avg=9103.79, stdev=6194.85 00:27:50.467 clat (usec): min=1657, max=35271, avg=30452.24, stdev=5602.77 00:27:50.467 lat (usec): min=1676, max=35278, avg=30461.35, stdev=5603.08 00:27:50.467 clat percentiles (usec): 00:27:50.467 | 1.00th=[ 4686], 5.00th=[19530], 10.00th=[22414], 20.00th=[25297], 00:27:50.467 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:27:50.467 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:27:50.467 | 99.00th=[34341], 99.50th=[34341], 99.90th=[35390], 99.95th=[35390], 00:27:50.467 | 99.99th=[35390] 00:27:50.467 bw ( KiB/s): min= 1916, max= 2560, per=4.48%, avg=2092.15, stdev=200.28, samples=20 00:27:50.467 iops : min= 479, max= 640, avg=523.00, stdev=50.00, samples=20 00:27:50.467 lat (msec) : 2=0.11%, 4=0.74%, 10=0.97%, 20=3.51%, 50=94.66% 00:27:50.467 cpu : usr=99.19%, sys=0.53%, ctx=13, majf=0, minf=13 00:27:50.467 IO depths : 1=6.1%, 2=12.3%, 4=24.6%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:50.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.467 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.467 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.467 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.467 
filename0: (groupid=0, jobs=1): err= 0: pid=1812255: Fri Apr 26 15:38:06 2024 00:27:50.467 read: IOPS=480, BW=1922KiB/s (1968kB/s)(18.8MiB/10001msec) 00:27:50.467 slat (nsec): min=4648, max=65582, avg=17730.64, stdev=10999.69 00:27:50.467 clat (usec): min=19857, max=60780, avg=33131.58, stdev=1936.48 00:27:50.467 lat (usec): min=19904, max=60794, avg=33149.31, stdev=1935.65 00:27:50.467 clat percentiles (usec): 00:27:50.467 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:27:50.467 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:27:50.467 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:27:50.467 | 99.00th=[35914], 99.50th=[47449], 99.90th=[60556], 99.95th=[60556], 00:27:50.467 | 99.99th=[60556] 00:27:50.467 bw ( KiB/s): min= 1795, max= 2048, per=4.11%, avg=1921.84, stdev=55.49, samples=19 00:27:50.467 iops : min= 448, max= 512, avg=480.42, stdev=13.97, samples=19 00:27:50.467 lat (msec) : 20=0.10%, 50=99.50%, 100=0.40% 00:27:50.467 cpu : usr=99.14%, sys=0.51%, ctx=69, majf=0, minf=22 00:27:50.467 IO depths : 1=6.0%, 2=12.1%, 4=24.6%, 8=50.8%, 16=6.6%, 32=0.0%, >=64=0.0% 00:27:50.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.467 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.467 issued rwts: total=4806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.467 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.467 filename0: (groupid=0, jobs=1): err= 0: pid=1812256: Fri Apr 26 15:38:06 2024 00:27:50.467 read: IOPS=481, BW=1928KiB/s (1974kB/s)(18.8MiB/10006msec) 00:27:50.467 slat (nsec): min=5516, max=86764, avg=15723.40, stdev=11463.96 00:27:50.467 clat (usec): min=18524, max=74462, avg=33069.13, stdev=2309.74 00:27:50.467 lat (usec): min=18538, max=74478, avg=33084.86, stdev=2309.69 00:27:50.467 clat percentiles (usec): 00:27:50.467 | 1.00th=[23987], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:27:50.467 | 
30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:27:50.467 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:27:50.467 | 99.00th=[35390], 99.50th=[47449], 99.90th=[55837], 99.95th=[55837], 00:27:50.467 | 99.99th=[74974] 00:27:50.467 bw ( KiB/s): min= 1840, max= 2048, per=4.11%, avg=1922.26, stdev=35.53, samples=19 00:27:50.467 iops : min= 460, max= 512, avg=480.53, stdev= 8.88, samples=19 00:27:50.467 lat (msec) : 20=0.37%, 50=99.13%, 100=0.50% 00:27:50.467 cpu : usr=99.31%, sys=0.42%, ctx=13, majf=0, minf=18 00:27:50.467 IO depths : 1=6.0%, 2=12.1%, 4=24.6%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:27:50.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.467 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.467 issued rwts: total=4822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.467 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.467 filename0: (groupid=0, jobs=1): err= 0: pid=1812257: Fri Apr 26 15:38:06 2024 00:27:50.467 read: IOPS=481, BW=1924KiB/s (1970kB/s)(18.8MiB/10011msec) 00:27:50.467 slat (nsec): min=5500, max=83434, avg=18513.15, stdev=12732.94 00:27:50.467 clat (usec): min=18138, max=48748, avg=33094.16, stdev=1631.00 00:27:50.467 lat (usec): min=18146, max=48758, avg=33112.67, stdev=1630.79 00:27:50.467 clat percentiles (usec): 00:27:50.467 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:27:50.467 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:27:50.467 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:27:50.467 | 99.00th=[35390], 99.50th=[46924], 99.90th=[47449], 99.95th=[47449], 00:27:50.467 | 99.99th=[48497] 00:27:50.467 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1919.37, stdev=42.69, samples=19 00:27:50.467 iops : min= 448, max= 512, avg=479.84, stdev=10.67, samples=19 00:27:50.467 lat (msec) : 20=0.54%, 50=99.46% 00:27:50.467 cpu : 
usr=99.08%, sys=0.63%, ctx=14, majf=0, minf=16 00:27:50.467 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:50.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.467 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.467 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.467 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.467 filename0: (groupid=0, jobs=1): err= 0: pid=1812258: Fri Apr 26 15:38:06 2024 00:27:50.467 read: IOPS=483, BW=1933KiB/s (1979kB/s)(18.9MiB/10013msec) 00:27:50.467 slat (nsec): min=5531, max=54588, avg=11418.50, stdev=7754.97 00:27:50.467 clat (usec): min=15874, max=41217, avg=33017.99, stdev=1564.53 00:27:50.467 lat (usec): min=15881, max=41243, avg=33029.41, stdev=1564.68 00:27:50.467 clat percentiles (usec): 00:27:50.467 | 1.00th=[23200], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:27:50.468 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:27:50.468 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:27:50.468 | 99.00th=[35390], 99.50th=[35914], 99.90th=[40633], 99.95th=[40633], 00:27:50.468 | 99.99th=[41157] 00:27:50.468 bw ( KiB/s): min= 1792, max= 2043, per=4.12%, avg=1925.79, stdev=64.64, samples=19 00:27:50.468 iops : min= 448, max= 510, avg=481.37, stdev=16.01, samples=19 00:27:50.468 lat (msec) : 20=0.12%, 50=99.88% 00:27:50.468 cpu : usr=98.51%, sys=0.86%, ctx=72, majf=0, minf=30 00:27:50.468 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:50.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.468 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.468 issued rwts: total=4838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.468 filename0: (groupid=0, jobs=1): err= 0: pid=1812259: Fri 
Apr 26 15:38:06 2024 00:27:50.468 read: IOPS=480, BW=1923KiB/s (1969kB/s)(18.8MiB/10019msec) 00:27:50.468 slat (nsec): min=5536, max=81037, avg=18243.92, stdev=11532.02 00:27:50.468 clat (usec): min=19315, max=59450, avg=33132.42, stdev=1870.17 00:27:50.468 lat (usec): min=19321, max=59466, avg=33150.66, stdev=1869.82 00:27:50.468 clat percentiles (usec): 00:27:50.468 | 1.00th=[31327], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:27:50.468 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:27:50.468 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:27:50.468 | 99.00th=[35914], 99.50th=[35914], 99.90th=[59507], 99.95th=[59507], 00:27:50.468 | 99.99th=[59507] 00:27:50.468 bw ( KiB/s): min= 1788, max= 2048, per=4.11%, avg=1919.00, stdev=59.21, samples=20 00:27:50.468 iops : min= 447, max= 512, avg=479.75, stdev=14.80, samples=20 00:27:50.468 lat (msec) : 20=0.04%, 50=99.63%, 100=0.33% 00:27:50.468 cpu : usr=98.51%, sys=0.89%, ctx=131, majf=0, minf=22 00:27:50.468 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:27:50.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.468 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.468 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.468 filename0: (groupid=0, jobs=1): err= 0: pid=1812260: Fri Apr 26 15:38:06 2024 00:27:50.468 read: IOPS=483, BW=1934KiB/s (1980kB/s)(18.9MiB/10027msec) 00:27:50.468 slat (nsec): min=5515, max=89385, avg=16567.49, stdev=12964.24 00:27:50.468 clat (usec): min=15605, max=35844, avg=32954.62, stdev=1559.82 00:27:50.468 lat (usec): min=15612, max=35851, avg=32971.19, stdev=1560.53 00:27:50.468 clat percentiles (usec): 00:27:50.468 | 1.00th=[27132], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:27:50.468 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 
60.00th=[33162], 00:27:50.468 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:27:50.468 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:27:50.468 | 99.99th=[35914] 00:27:50.468 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1932.35, stdev=56.78, samples=20 00:27:50.468 iops : min= 448, max= 512, avg=483.05, stdev=14.12, samples=20 00:27:50.468 lat (msec) : 20=0.47%, 50=99.53% 00:27:50.468 cpu : usr=98.53%, sys=0.87%, ctx=102, majf=0, minf=23 00:27:50.468 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:50.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.468 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.468 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.468 filename1: (groupid=0, jobs=1): err= 0: pid=1812262: Fri Apr 26 15:38:06 2024 00:27:50.468 read: IOPS=480, BW=1923KiB/s (1969kB/s)(18.8MiB/10019msec) 00:27:50.468 slat (nsec): min=5568, max=82394, avg=19814.49, stdev=13631.02 00:27:50.468 clat (usec): min=19922, max=58885, avg=33102.64, stdev=1827.80 00:27:50.468 lat (usec): min=19930, max=58902, avg=33122.46, stdev=1827.63 00:27:50.468 clat percentiles (usec): 00:27:50.468 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:27:50.468 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.468 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34866], 00:27:50.468 | 99.00th=[35390], 99.50th=[35914], 99.90th=[58983], 99.95th=[58983], 00:27:50.468 | 99.99th=[58983] 00:27:50.468 bw ( KiB/s): min= 1788, max= 2048, per=4.11%, avg=1919.15, stdev=58.88, samples=20 00:27:50.468 iops : min= 447, max= 512, avg=479.75, stdev=14.80, samples=20 00:27:50.468 lat (msec) : 20=0.21%, 50=99.46%, 100=0.33% 00:27:50.468 cpu : usr=99.04%, sys=0.62%, ctx=48, majf=0, minf=25 00:27:50.468 
IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:50.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.468 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.468 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.468 filename1: (groupid=0, jobs=1): err= 0: pid=1812263: Fri Apr 26 15:38:06 2024 00:27:50.468 read: IOPS=484, BW=1937KiB/s (1984kB/s)(18.9MiB/10007msec) 00:27:50.468 slat (nsec): min=5459, max=81947, avg=16717.52, stdev=12402.84 00:27:50.468 clat (usec): min=6783, max=53570, avg=32904.41, stdev=3746.51 00:27:50.468 lat (usec): min=6789, max=53588, avg=32921.12, stdev=3747.30 00:27:50.468 clat percentiles (usec): 00:27:50.468 | 1.00th=[20579], 5.00th=[26084], 10.00th=[29230], 20.00th=[32637], 00:27:50.468 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.468 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34866], 95.00th=[39584], 00:27:50.468 | 99.00th=[45351], 99.50th=[48497], 99.90th=[53740], 99.95th=[53740], 00:27:50.468 | 99.99th=[53740] 00:27:50.468 bw ( KiB/s): min= 1600, max= 2080, per=4.14%, avg=1932.32, stdev=103.94, samples=19 00:27:50.468 iops : min= 400, max= 520, avg=483.00, stdev=25.95, samples=19 00:27:50.468 lat (msec) : 10=0.08%, 20=0.50%, 50=99.26%, 100=0.17% 00:27:50.468 cpu : usr=98.68%, sys=0.66%, ctx=104, majf=0, minf=29 00:27:50.468 IO depths : 1=3.6%, 2=7.5%, 4=16.8%, 8=61.9%, 16=10.2%, 32=0.0%, >=64=0.0% 00:27:50.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.468 complete : 0=0.0%, 4=92.2%, 8=3.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.468 issued rwts: total=4846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.468 filename1: (groupid=0, jobs=1): err= 0: pid=1812264: Fri Apr 26 15:38:06 2024 00:27:50.468 read: 
IOPS=485, BW=1941KiB/s (1988kB/s)(19.0MiB/10009msec) 00:27:50.468 slat (nsec): min=5495, max=75560, avg=17306.65, stdev=12759.38 00:27:50.468 clat (usec): min=10800, max=54627, avg=32813.60, stdev=3206.52 00:27:50.468 lat (usec): min=10807, max=54634, avg=32830.91, stdev=3207.30 00:27:50.468 clat percentiles (usec): 00:27:50.468 | 1.00th=[20579], 5.00th=[26870], 10.00th=[31851], 20.00th=[32637], 00:27:50.468 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.468 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[35390], 00:27:50.468 | 99.00th=[44303], 99.50th=[47973], 99.90th=[54264], 99.95th=[54264], 00:27:50.468 | 99.99th=[54789] 00:27:50.468 bw ( KiB/s): min= 1792, max= 2096, per=4.15%, avg=1937.89, stdev=64.49, samples=19 00:27:50.468 iops : min= 448, max= 524, avg=484.47, stdev=16.12, samples=19 00:27:50.468 lat (msec) : 20=0.29%, 50=99.34%, 100=0.37% 00:27:50.468 cpu : usr=99.25%, sys=0.42%, ctx=37, majf=0, minf=21 00:27:50.468 IO depths : 1=4.1%, 2=8.7%, 4=19.5%, 8=58.5%, 16=9.2%, 32=0.0%, >=64=0.0% 00:27:50.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.468 complete : 0=0.0%, 4=92.8%, 8=2.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.468 issued rwts: total=4858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.468 filename1: (groupid=0, jobs=1): err= 0: pid=1812265: Fri Apr 26 15:38:06 2024 00:27:50.468 read: IOPS=482, BW=1930KiB/s (1976kB/s)(18.9MiB/10037msec) 00:27:50.468 slat (nsec): min=5552, max=58600, avg=13677.38, stdev=8303.22 00:27:50.468 clat (usec): min=8516, max=46283, avg=32995.65, stdev=1975.02 00:27:50.468 lat (usec): min=8525, max=46291, avg=33009.33, stdev=1974.90 00:27:50.468 clat percentiles (usec): 00:27:50.468 | 1.00th=[25822], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:27:50.468 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:27:50.468 | 70.00th=[33424], 
80.00th=[33817], 90.00th=[33817], 95.00th=[34866], 00:27:50.468 | 99.00th=[35390], 99.50th=[36439], 99.90th=[46400], 99.95th=[46400], 00:27:50.468 | 99.99th=[46400] 00:27:50.468 bw ( KiB/s): min= 1916, max= 2048, per=4.14%, avg=1932.60, stdev=39.48, samples=20 00:27:50.468 iops : min= 479, max= 512, avg=483.15, stdev= 9.87, samples=20 00:27:50.468 lat (msec) : 10=0.29%, 20=0.04%, 50=99.67% 00:27:50.468 cpu : usr=98.93%, sys=0.72%, ctx=69, majf=0, minf=20 00:27:50.468 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:50.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.468 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.468 issued rwts: total=4843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.468 filename1: (groupid=0, jobs=1): err= 0: pid=1812266: Fri Apr 26 15:38:06 2024 00:27:50.468 read: IOPS=481, BW=1925KiB/s (1971kB/s)(18.8MiB/10009msec) 00:27:50.468 slat (nsec): min=5546, max=87688, avg=16226.11, stdev=11624.68 00:27:50.468 clat (usec): min=19843, max=46857, avg=33100.06, stdev=1520.35 00:27:50.468 lat (usec): min=19861, max=46866, avg=33116.29, stdev=1519.79 00:27:50.468 clat percentiles (usec): 00:27:50.468 | 1.00th=[27395], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:27:50.468 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:27:50.468 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:27:50.468 | 99.00th=[35390], 99.50th=[43779], 99.90th=[46924], 99.95th=[46924], 00:27:50.468 | 99.99th=[46924] 00:27:50.468 bw ( KiB/s): min= 1788, max= 2048, per=4.11%, avg=1919.16, stdev=43.36, samples=19 00:27:50.468 iops : min= 447, max= 512, avg=479.79, stdev=10.84, samples=19 00:27:50.468 lat (msec) : 20=0.04%, 50=99.96% 00:27:50.468 cpu : usr=99.24%, sys=0.47%, ctx=9, majf=0, minf=22 00:27:50.468 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 
16=6.6%, 32=0.0%, >=64=0.0% 00:27:50.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.469 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.469 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.469 filename1: (groupid=0, jobs=1): err= 0: pid=1812267: Fri Apr 26 15:38:06 2024 00:27:50.469 read: IOPS=481, BW=1925KiB/s (1971kB/s)(18.8MiB/10006msec) 00:27:50.469 slat (nsec): min=5465, max=71416, avg=10105.49, stdev=6411.10 00:27:50.469 clat (usec): min=16096, max=49272, avg=33157.00, stdev=1859.14 00:27:50.469 lat (usec): min=16104, max=49278, avg=33167.11, stdev=1859.41 00:27:50.469 clat percentiles (usec): 00:27:50.469 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32637], 20.00th=[32637], 00:27:50.469 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:27:50.469 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:27:50.469 | 99.00th=[35390], 99.50th=[43779], 99.90th=[47973], 99.95th=[47973], 00:27:50.469 | 99.99th=[49021] 00:27:50.469 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1919.74, stdev=43.02, samples=19 00:27:50.469 iops : min= 448, max= 512, avg=479.89, stdev=10.75, samples=19 00:27:50.469 lat (msec) : 20=0.58%, 50=99.42% 00:27:50.469 cpu : usr=99.16%, sys=0.55%, ctx=26, majf=0, minf=21 00:27:50.469 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:27:50.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.469 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.469 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.469 filename1: (groupid=0, jobs=1): err= 0: pid=1812268: Fri Apr 26 15:38:06 2024 00:27:50.469 read: IOPS=480, BW=1921KiB/s (1968kB/s)(18.8MiB/10026msec) 00:27:50.469 
slat (nsec): min=5549, max=81902, avg=23243.15, stdev=14458.23 00:27:50.469 clat (usec): min=20398, max=57817, avg=33082.53, stdev=1641.21 00:27:50.469 lat (usec): min=20405, max=57835, avg=33105.77, stdev=1640.89 00:27:50.469 clat percentiles (usec): 00:27:50.469 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:27:50.469 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.469 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:27:50.469 | 99.00th=[35390], 99.50th=[35914], 99.90th=[57934], 99.95th=[57934], 00:27:50.469 | 99.99th=[57934] 00:27:50.469 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1919.35, stdev=58.41, samples=20 00:27:50.469 iops : min= 448, max= 512, avg=479.80, stdev=14.69, samples=20 00:27:50.469 lat (msec) : 50=99.67%, 100=0.33% 00:27:50.469 cpu : usr=98.58%, sys=0.83%, ctx=59, majf=0, minf=24 00:27:50.469 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:50.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.469 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.469 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.469 filename1: (groupid=0, jobs=1): err= 0: pid=1812269: Fri Apr 26 15:38:06 2024 00:27:50.469 read: IOPS=483, BW=1934KiB/s (1981kB/s)(18.9MiB/10025msec) 00:27:50.469 slat (nsec): min=5504, max=78602, avg=15530.25, stdev=13524.26 00:27:50.469 clat (usec): min=18959, max=40161, avg=32965.13, stdev=1484.20 00:27:50.469 lat (usec): min=18968, max=40185, avg=32980.66, stdev=1484.71 00:27:50.469 clat percentiles (usec): 00:27:50.469 | 1.00th=[24773], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:27:50.469 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:27:50.469 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:27:50.469 | 
99.00th=[35390], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:27:50.469 | 99.99th=[40109] 00:27:50.469 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1932.55, stdev=56.72, samples=20 00:27:50.469 iops : min= 448, max= 512, avg=483.10, stdev=14.10, samples=20 00:27:50.469 lat (msec) : 20=0.60%, 50=99.40% 00:27:50.469 cpu : usr=99.30%, sys=0.40%, ctx=9, majf=0, minf=16 00:27:50.469 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:50.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.469 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.469 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.469 filename2: (groupid=0, jobs=1): err= 0: pid=1812270: Fri Apr 26 15:38:06 2024 00:27:50.469 read: IOPS=484, BW=1940KiB/s (1987kB/s)(19.0MiB/10029msec) 00:27:50.469 slat (nsec): min=5515, max=90645, avg=16890.75, stdev=14380.81 00:27:50.469 clat (usec): min=9435, max=35870, avg=32858.51, stdev=1945.39 00:27:50.469 lat (usec): min=9446, max=35877, avg=32875.40, stdev=1945.57 00:27:50.469 clat percentiles (usec): 00:27:50.469 | 1.00th=[22676], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:27:50.469 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.469 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:27:50.469 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:27:50.469 | 99.99th=[35914] 00:27:50.469 bw ( KiB/s): min= 1916, max= 2048, per=4.15%, avg=1939.00, stdev=46.99, samples=20 00:27:50.469 iops : min= 479, max= 512, avg=484.75, stdev=11.75, samples=20 00:27:50.469 lat (msec) : 10=0.29%, 20=0.37%, 50=99.34% 00:27:50.469 cpu : usr=99.16%, sys=0.51%, ctx=68, majf=0, minf=25 00:27:50.469 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:50.469 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.469 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.469 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.469 filename2: (groupid=0, jobs=1): err= 0: pid=1812271: Fri Apr 26 15:38:06 2024 00:27:50.469 read: IOPS=485, BW=1941KiB/s (1987kB/s)(19.0MiB/10029msec) 00:27:50.469 slat (nsec): min=5493, max=85707, avg=16820.06, stdev=12794.17 00:27:50.469 clat (usec): min=13388, max=72476, avg=32854.63, stdev=5255.89 00:27:50.469 lat (usec): min=13408, max=72493, avg=32871.45, stdev=5256.82 00:27:50.469 clat percentiles (usec): 00:27:50.469 | 1.00th=[20317], 5.00th=[23200], 10.00th=[26608], 20.00th=[32113], 00:27:50.469 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:27:50.469 | 70.00th=[33424], 80.00th=[33817], 90.00th=[37487], 95.00th=[41681], 00:27:50.469 | 99.00th=[51643], 99.50th=[54264], 99.90th=[72877], 99.95th=[72877], 00:27:50.469 | 99.99th=[72877] 00:27:50.469 bw ( KiB/s): min= 1715, max= 2192, per=4.15%, avg=1939.10, stdev=100.94, samples=20 00:27:50.469 iops : min= 428, max= 548, avg=484.70, stdev=25.27, samples=20 00:27:50.469 lat (msec) : 20=0.78%, 50=97.45%, 100=1.77% 00:27:50.469 cpu : usr=99.15%, sys=0.56%, ctx=10, majf=0, minf=21 00:27:50.469 IO depths : 1=1.8%, 2=3.7%, 4=10.0%, 8=71.5%, 16=12.9%, 32=0.0%, >=64=0.0% 00:27:50.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.469 complete : 0=0.0%, 4=89.7%, 8=6.8%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.469 issued rwts: total=4866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.469 filename2: (groupid=0, jobs=1): err= 0: pid=1812273: Fri Apr 26 15:38:06 2024 00:27:50.469 read: IOPS=481, BW=1925KiB/s (1971kB/s)(18.8MiB/10008msec) 00:27:50.469 slat (nsec): min=5496, max=79080, avg=17209.69, 
stdev=12423.37 00:27:50.469 clat (usec): min=17598, max=57016, avg=33102.27, stdev=1580.85 00:27:50.469 lat (usec): min=17628, max=57032, avg=33119.48, stdev=1580.64 00:27:50.469 clat percentiles (usec): 00:27:50.469 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:27:50.469 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:27:50.469 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:27:50.469 | 99.00th=[35390], 99.50th=[43254], 99.90th=[46924], 99.95th=[47973], 00:27:50.469 | 99.99th=[56886] 00:27:50.469 bw ( KiB/s): min= 1795, max= 2048, per=4.11%, avg=1919.53, stdev=43.53, samples=19 00:27:50.469 iops : min= 448, max= 512, avg=479.84, stdev=11.00, samples=19 00:27:50.469 lat (msec) : 20=0.46%, 50=99.50%, 100=0.04% 00:27:50.469 cpu : usr=98.81%, sys=0.80%, ctx=78, majf=0, minf=20 00:27:50.469 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:27:50.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.469 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.469 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.469 filename2: (groupid=0, jobs=1): err= 0: pid=1812274: Fri Apr 26 15:38:06 2024 00:27:50.469 read: IOPS=480, BW=1921KiB/s (1968kB/s)(18.8MiB/10026msec) 00:27:50.469 slat (nsec): min=5526, max=91296, avg=21905.29, stdev=13522.99 00:27:50.469 clat (usec): min=20384, max=58081, avg=33113.36, stdev=1652.62 00:27:50.469 lat (usec): min=20394, max=58099, avg=33135.27, stdev=1652.14 00:27:50.469 clat percentiles (usec): 00:27:50.469 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:27:50.469 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.469 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:27:50.469 | 99.00th=[35390], 99.50th=[35914], 
99.90th=[57934], 99.95th=[57934], 00:27:50.469 | 99.99th=[57934] 00:27:50.469 bw ( KiB/s): min= 1788, max= 2048, per=4.11%, avg=1919.15, stdev=58.88, samples=20 00:27:50.469 iops : min= 447, max= 512, avg=479.75, stdev=14.80, samples=20 00:27:50.469 lat (msec) : 50=99.67%, 100=0.33% 00:27:50.469 cpu : usr=99.19%, sys=0.49%, ctx=44, majf=0, minf=20 00:27:50.469 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:50.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.469 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.469 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.469 filename2: (groupid=0, jobs=1): err= 0: pid=1812275: Fri Apr 26 15:38:06 2024 00:27:50.469 read: IOPS=484, BW=1939KiB/s (1986kB/s)(19.0MiB/10007msec) 00:27:50.470 slat (usec): min=5, max=128, avg=17.82, stdev=12.62 00:27:50.470 clat (usec): min=10718, max=55434, avg=32847.68, stdev=4264.28 00:27:50.470 lat (usec): min=10724, max=55441, avg=32865.50, stdev=4265.12 00:27:50.470 clat percentiles (usec): 00:27:50.470 | 1.00th=[19006], 5.00th=[25560], 10.00th=[28967], 20.00th=[32375], 00:27:50.470 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.470 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34866], 95.00th=[39584], 00:27:50.470 | 99.00th=[47449], 99.50th=[48497], 99.90th=[54789], 99.95th=[55313], 00:27:50.470 | 99.99th=[55313] 00:27:50.470 bw ( KiB/s): min= 1792, max= 2064, per=4.13%, avg=1927.32, stdev=66.15, samples=19 00:27:50.470 iops : min= 448, max= 516, avg=481.79, stdev=16.54, samples=19 00:27:50.470 lat (msec) : 20=2.06%, 50=97.49%, 100=0.45% 00:27:50.470 cpu : usr=98.47%, sys=0.87%, ctx=95, majf=0, minf=21 00:27:50.470 IO depths : 1=3.8%, 2=7.8%, 4=17.1%, 8=61.5%, 16=9.8%, 32=0.0%, >=64=0.0% 00:27:50.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:27:50.470 complete : 0=0.0%, 4=92.2%, 8=3.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.470 issued rwts: total=4852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.470 filename2: (groupid=0, jobs=1): err= 0: pid=1812276: Fri Apr 26 15:38:06 2024 00:27:50.470 read: IOPS=481, BW=1927KiB/s (1974kB/s)(18.8MiB/10007msec) 00:27:50.470 slat (nsec): min=5490, max=81773, avg=17656.49, stdev=13471.07 00:27:50.470 clat (usec): min=14679, max=53954, avg=33087.52, stdev=3751.26 00:27:50.470 lat (usec): min=14686, max=53963, avg=33105.18, stdev=3751.70 00:27:50.470 clat percentiles (usec): 00:27:50.470 | 1.00th=[22938], 5.00th=[26608], 10.00th=[28443], 20.00th=[32375], 00:27:50.470 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:27:50.470 | 70.00th=[33424], 80.00th=[33817], 90.00th=[36439], 95.00th=[39584], 00:27:50.470 | 99.00th=[45351], 99.50th=[50070], 99.90th=[53740], 99.95th=[53740], 00:27:50.470 | 99.99th=[53740] 00:27:50.470 bw ( KiB/s): min= 1792, max= 1984, per=4.12%, avg=1924.79, stdev=47.21, samples=19 00:27:50.470 iops : min= 448, max= 496, avg=481.16, stdev=11.78, samples=19 00:27:50.470 lat (msec) : 20=0.33%, 50=99.09%, 100=0.58% 00:27:50.470 cpu : usr=98.26%, sys=1.13%, ctx=94, majf=0, minf=23 00:27:50.470 IO depths : 1=1.4%, 2=3.4%, 4=9.6%, 8=71.7%, 16=13.9%, 32=0.0%, >=64=0.0% 00:27:50.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.470 complete : 0=0.0%, 4=90.6%, 8=6.5%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.470 issued rwts: total=4822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.470 filename2: (groupid=0, jobs=1): err= 0: pid=1812277: Fri Apr 26 15:38:06 2024 00:27:50.470 read: IOPS=480, BW=1922KiB/s (1968kB/s)(18.8MiB/10021msec) 00:27:50.470 slat (nsec): min=5595, max=82442, avg=21839.13, stdev=13465.53 00:27:50.470 clat (usec): min=23242, 
max=53328, avg=33085.74, stdev=1427.88 00:27:50.470 lat (usec): min=23248, max=53359, avg=33107.58, stdev=1427.89 00:27:50.470 clat percentiles (usec): 00:27:50.470 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:27:50.470 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.470 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:27:50.470 | 99.00th=[35390], 99.50th=[39060], 99.90th=[53216], 99.95th=[53216], 00:27:50.470 | 99.99th=[53216] 00:27:50.470 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1919.40, stdev=41.55, samples=20 00:27:50.470 iops : min= 448, max= 512, avg=479.85, stdev=10.39, samples=20 00:27:50.470 lat (msec) : 50=99.67%, 100=0.33% 00:27:50.470 cpu : usr=98.61%, sys=0.80%, ctx=127, majf=0, minf=20 00:27:50.470 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:27:50.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.470 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.470 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.470 filename2: (groupid=0, jobs=1): err= 0: pid=1812278: Fri Apr 26 15:38:06 2024 00:27:50.470 read: IOPS=483, BW=1936KiB/s (1982kB/s)(19.0MiB/10029msec) 00:27:50.470 slat (nsec): min=5530, max=54384, avg=10609.21, stdev=6213.77 00:27:50.470 clat (usec): min=8385, max=51450, avg=32970.40, stdev=2339.04 00:27:50.470 lat (usec): min=8399, max=51459, avg=32981.01, stdev=2338.99 00:27:50.470 clat percentiles (usec): 00:27:50.470 | 1.00th=[21365], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:27:50.470 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:27:50.470 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34866], 00:27:50.470 | 99.00th=[35390], 99.50th=[38011], 99.90th=[50070], 99.95th=[50070], 00:27:50.470 | 99.99th=[51643] 
00:27:50.470 bw ( KiB/s): min= 1916, max= 2048, per=4.14%, avg=1935.00, stdev=40.12, samples=20 00:27:50.470 iops : min= 479, max= 512, avg=483.75, stdev=10.03, samples=20 00:27:50.470 lat (msec) : 10=0.33%, 20=0.04%, 50=99.42%, 100=0.21% 00:27:50.470 cpu : usr=99.10%, sys=0.59%, ctx=12, majf=0, minf=23 00:27:50.470 IO depths : 1=5.9%, 2=12.1%, 4=24.7%, 8=50.7%, 16=6.6%, 32=0.0%, >=64=0.0% 00:27:50.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.470 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.470 issued rwts: total=4853,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.470 00:27:50.470 Run status group 0 (all jobs): 00:27:50.470 READ: bw=45.6MiB/s (47.8MB/s), 1921KiB/s-2257KiB/s (1968kB/s-2311kB/s), io=458MiB (480MB), run=10001-10037msec 00:27:50.470 15:38:06 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:50.470 15:38:06 -- target/dif.sh@43 -- # local sub 00:27:50.470 15:38:06 -- target/dif.sh@45 -- # for sub in "$@" 00:27:50.470 15:38:06 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:50.470 15:38:06 -- target/dif.sh@36 -- # local sub_id=0 00:27:50.470 15:38:06 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:50.470 15:38:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.470 15:38:06 -- common/autotest_common.sh@10 -- # set +x 00:27:50.470 15:38:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.470 15:38:06 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:50.470 15:38:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.470 15:38:06 -- common/autotest_common.sh@10 -- # set +x 00:27:50.470 15:38:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.470 15:38:06 -- target/dif.sh@45 -- # for sub in "$@" 00:27:50.470 15:38:06 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:50.470 15:38:06 -- target/dif.sh@36 
-- # local sub_id=1 00:27:50.470 15:38:06 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:50.470 15:38:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.470 15:38:06 -- common/autotest_common.sh@10 -- # set +x 00:27:50.470 15:38:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.470 15:38:06 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:50.470 15:38:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.470 15:38:06 -- common/autotest_common.sh@10 -- # set +x 00:27:50.470 15:38:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.470 15:38:06 -- target/dif.sh@45 -- # for sub in "$@" 00:27:50.470 15:38:06 -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:50.470 15:38:06 -- target/dif.sh@36 -- # local sub_id=2 00:27:50.470 15:38:06 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:50.470 15:38:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.470 15:38:06 -- common/autotest_common.sh@10 -- # set +x 00:27:50.470 15:38:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.470 15:38:06 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:50.470 15:38:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.470 15:38:06 -- common/autotest_common.sh@10 -- # set +x 00:27:50.470 15:38:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.470 15:38:06 -- target/dif.sh@115 -- # NULL_DIF=1 00:27:50.470 15:38:06 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:50.470 15:38:06 -- target/dif.sh@115 -- # numjobs=2 00:27:50.470 15:38:06 -- target/dif.sh@115 -- # iodepth=8 00:27:50.470 15:38:06 -- target/dif.sh@115 -- # runtime=5 00:27:50.470 15:38:06 -- target/dif.sh@115 -- # files=1 00:27:50.470 15:38:06 -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:50.470 15:38:06 -- target/dif.sh@28 -- # local sub 00:27:50.470 15:38:06 -- target/dif.sh@30 -- # for sub in "$@" 00:27:50.470 
15:38:06 -- target/dif.sh@31 -- # create_subsystem 0 00:27:50.470 15:38:06 -- target/dif.sh@18 -- # local sub_id=0 00:27:50.470 15:38:06 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:50.470 15:38:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.470 15:38:06 -- common/autotest_common.sh@10 -- # set +x 00:27:50.470 bdev_null0 00:27:50.470 15:38:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.470 15:38:06 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:50.470 15:38:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.470 15:38:06 -- common/autotest_common.sh@10 -- # set +x 00:27:50.470 15:38:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.470 15:38:06 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:50.470 15:38:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.470 15:38:06 -- common/autotest_common.sh@10 -- # set +x 00:27:50.470 15:38:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.470 15:38:06 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:50.470 15:38:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.470 15:38:06 -- common/autotest_common.sh@10 -- # set +x 00:27:50.470 [2024-04-26 15:38:06.316816] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:50.470 15:38:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.470 15:38:06 -- target/dif.sh@30 -- # for sub in "$@" 00:27:50.470 15:38:06 -- target/dif.sh@31 -- # create_subsystem 1 00:27:50.470 15:38:06 -- target/dif.sh@18 -- # local sub_id=1 00:27:50.470 15:38:06 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:50.470 15:38:06 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.470 15:38:06 -- common/autotest_common.sh@10 -- # set +x 00:27:50.470 bdev_null1 00:27:50.470 15:38:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.471 15:38:06 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:50.471 15:38:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.471 15:38:06 -- common/autotest_common.sh@10 -- # set +x 00:27:50.471 15:38:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.471 15:38:06 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:50.471 15:38:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.471 15:38:06 -- common/autotest_common.sh@10 -- # set +x 00:27:50.471 15:38:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.471 15:38:06 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:50.471 15:38:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.471 15:38:06 -- common/autotest_common.sh@10 -- # set +x 00:27:50.471 15:38:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.471 15:38:06 -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:50.471 15:38:06 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:50.471 15:38:06 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:50.471 15:38:06 -- nvmf/common.sh@521 -- # config=() 00:27:50.471 15:38:06 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:50.471 15:38:06 -- nvmf/common.sh@521 -- # local subsystem config 00:27:50.471 15:38:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:50.471 15:38:06 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
00:27:50.471 15:38:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:50.471 { 00:27:50.471 "params": { 00:27:50.471 "name": "Nvme$subsystem", 00:27:50.471 "trtype": "$TEST_TRANSPORT", 00:27:50.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.471 "adrfam": "ipv4", 00:27:50.471 "trsvcid": "$NVMF_PORT", 00:27:50.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.471 "hdgst": ${hdgst:-false}, 00:27:50.471 "ddgst": ${ddgst:-false} 00:27:50.471 }, 00:27:50.471 "method": "bdev_nvme_attach_controller" 00:27:50.471 } 00:27:50.471 EOF 00:27:50.471 )") 00:27:50.471 15:38:06 -- target/dif.sh@82 -- # gen_fio_conf 00:27:50.471 15:38:06 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:50.471 15:38:06 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:50.471 15:38:06 -- target/dif.sh@54 -- # local file 00:27:50.471 15:38:06 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:50.471 15:38:06 -- target/dif.sh@56 -- # cat 00:27:50.471 15:38:06 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:50.471 15:38:06 -- common/autotest_common.sh@1327 -- # shift 00:27:50.471 15:38:06 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:50.471 15:38:06 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:50.471 15:38:06 -- nvmf/common.sh@543 -- # cat 00:27:50.471 15:38:06 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:50.471 15:38:06 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:50.471 15:38:06 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:50.471 15:38:06 -- target/dif.sh@72 -- # (( file <= files )) 00:27:50.471 15:38:06 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:50.471 15:38:06 -- target/dif.sh@73 -- # cat 00:27:50.471 15:38:06 -- 
nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:50.471 15:38:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:50.471 { 00:27:50.471 "params": { 00:27:50.471 "name": "Nvme$subsystem", 00:27:50.471 "trtype": "$TEST_TRANSPORT", 00:27:50.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.471 "adrfam": "ipv4", 00:27:50.471 "trsvcid": "$NVMF_PORT", 00:27:50.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.471 "hdgst": ${hdgst:-false}, 00:27:50.471 "ddgst": ${ddgst:-false} 00:27:50.471 }, 00:27:50.471 "method": "bdev_nvme_attach_controller" 00:27:50.471 } 00:27:50.471 EOF 00:27:50.471 )") 00:27:50.471 15:38:06 -- target/dif.sh@72 -- # (( file++ )) 00:27:50.471 15:38:06 -- nvmf/common.sh@543 -- # cat 00:27:50.471 15:38:06 -- target/dif.sh@72 -- # (( file <= files )) 00:27:50.471 15:38:06 -- nvmf/common.sh@545 -- # jq . 00:27:50.471 15:38:06 -- nvmf/common.sh@546 -- # IFS=, 00:27:50.471 15:38:06 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:50.471 "params": { 00:27:50.471 "name": "Nvme0", 00:27:50.471 "trtype": "tcp", 00:27:50.471 "traddr": "10.0.0.2", 00:27:50.471 "adrfam": "ipv4", 00:27:50.471 "trsvcid": "4420", 00:27:50.471 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:50.471 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:50.471 "hdgst": false, 00:27:50.471 "ddgst": false 00:27:50.471 }, 00:27:50.471 "method": "bdev_nvme_attach_controller" 00:27:50.471 },{ 00:27:50.471 "params": { 00:27:50.471 "name": "Nvme1", 00:27:50.471 "trtype": "tcp", 00:27:50.471 "traddr": "10.0.0.2", 00:27:50.471 "adrfam": "ipv4", 00:27:50.471 "trsvcid": "4420", 00:27:50.471 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:50.471 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:50.471 "hdgst": false, 00:27:50.471 "ddgst": false 00:27:50.471 }, 00:27:50.471 "method": "bdev_nvme_attach_controller" 00:27:50.471 }' 00:27:50.471 15:38:06 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:50.471 15:38:06 -- 
common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:50.471 15:38:06 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:50.471 15:38:06 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:50.471 15:38:06 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:50.471 15:38:06 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:50.471 15:38:06 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:50.471 15:38:06 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:50.471 15:38:06 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:50.471 15:38:06 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:50.471 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:50.471 ... 00:27:50.471 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:50.471 ... 
00:27:50.471 fio-3.35 00:27:50.471 Starting 4 threads 00:27:50.471 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.807 00:27:55.807 filename0: (groupid=0, jobs=1): err= 0: pid=1814532: Fri Apr 26 15:38:12 2024 00:27:55.807 read: IOPS=2077, BW=16.2MiB/s (17.0MB/s)(81.2MiB/5003msec) 00:27:55.807 slat (nsec): min=5326, max=68149, avg=6202.71, stdev=2148.19 00:27:55.807 clat (usec): min=2007, max=6866, avg=3833.17, stdev=691.10 00:27:55.807 lat (usec): min=2015, max=6872, avg=3839.37, stdev=690.99 00:27:55.807 clat percentiles (usec): 00:27:55.807 | 1.00th=[ 2638], 5.00th=[ 2999], 10.00th=[ 3195], 20.00th=[ 3359], 00:27:55.807 | 30.00th=[ 3490], 40.00th=[ 3556], 50.00th=[ 3654], 60.00th=[ 3752], 00:27:55.807 | 70.00th=[ 3884], 80.00th=[ 4178], 90.00th=[ 5014], 95.00th=[ 5342], 00:27:55.807 | 99.00th=[ 5866], 99.50th=[ 5997], 99.90th=[ 6456], 99.95th=[ 6587], 00:27:55.807 | 99.99th=[ 6849] 00:27:55.807 bw ( KiB/s): min=16384, max=16880, per=25.21%, avg=16620.90, stdev=164.02, samples=10 00:27:55.807 iops : min= 2048, max= 2110, avg=2077.60, stdev=20.50, samples=10 00:27:55.807 lat (msec) : 4=74.97%, 10=25.03% 00:27:55.807 cpu : usr=96.70%, sys=3.08%, ctx=4, majf=0, minf=36 00:27:55.807 IO depths : 1=0.4%, 2=1.2%, 4=70.8%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:55.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.807 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.807 issued rwts: total=10394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.807 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:55.807 filename0: (groupid=0, jobs=1): err= 0: pid=1814533: Fri Apr 26 15:38:12 2024 00:27:55.807 read: IOPS=2020, BW=15.8MiB/s (16.6MB/s)(79.0MiB/5002msec) 00:27:55.807 slat (nsec): min=5330, max=68046, avg=5964.37, stdev=1936.71 00:27:55.807 clat (usec): min=2054, max=7217, avg=3941.69, stdev=741.09 00:27:55.807 lat (usec): min=2059, max=7223, avg=3947.66, stdev=741.01 00:27:55.807 clat 
percentiles (usec): 00:27:55.807 | 1.00th=[ 2835], 5.00th=[ 3163], 10.00th=[ 3261], 20.00th=[ 3458], 00:27:55.807 | 30.00th=[ 3523], 40.00th=[ 3621], 50.00th=[ 3720], 60.00th=[ 3818], 00:27:55.807 | 70.00th=[ 3949], 80.00th=[ 4359], 90.00th=[ 5276], 95.00th=[ 5538], 00:27:55.807 | 99.00th=[ 6128], 99.50th=[ 6259], 99.90th=[ 6718], 99.95th=[ 6849], 00:27:55.807 | 99.99th=[ 7242] 00:27:55.807 bw ( KiB/s): min=15840, max=16432, per=24.48%, avg=16142.22, stdev=194.39, samples=9 00:27:55.807 iops : min= 1980, max= 2054, avg=2017.78, stdev=24.30, samples=9 00:27:55.807 lat (msec) : 4=71.43%, 10=28.57% 00:27:55.807 cpu : usr=97.44%, sys=2.32%, ctx=10, majf=0, minf=33 00:27:55.807 IO depths : 1=0.5%, 2=1.0%, 4=71.3%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:55.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.807 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.807 issued rwts: total=10108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.807 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:55.807 filename1: (groupid=0, jobs=1): err= 0: pid=1814534: Fri Apr 26 15:38:12 2024 00:27:55.807 read: IOPS=2114, BW=16.5MiB/s (17.3MB/s)(82.7MiB/5004msec) 00:27:55.807 slat (nsec): min=5325, max=35899, avg=6020.74, stdev=1932.83 00:27:55.807 clat (usec): min=1288, max=43540, avg=3766.59, stdev=1290.06 00:27:55.807 lat (usec): min=1294, max=43575, avg=3772.61, stdev=1290.29 00:27:55.807 clat percentiles (usec): 00:27:55.807 | 1.00th=[ 2442], 5.00th=[ 2737], 10.00th=[ 2933], 20.00th=[ 3195], 00:27:55.807 | 30.00th=[ 3359], 40.00th=[ 3490], 50.00th=[ 3621], 60.00th=[ 3785], 00:27:55.807 | 70.00th=[ 4015], 80.00th=[ 4293], 90.00th=[ 4621], 95.00th=[ 5080], 00:27:55.807 | 99.00th=[ 5604], 99.50th=[ 5866], 99.90th=[ 6456], 99.95th=[43254], 00:27:55.807 | 99.99th=[43779] 00:27:55.807 bw ( KiB/s): min=15856, max=17424, per=25.67%, avg=16923.20, stdev=458.17, samples=10 00:27:55.807 iops : min= 1982, max= 2178, 
avg=2115.40, stdev=57.27, samples=10 00:27:55.807 lat (msec) : 2=0.12%, 4=69.54%, 10=30.26%, 50=0.08% 00:27:55.807 cpu : usr=97.44%, sys=2.34%, ctx=7, majf=0, minf=83 00:27:55.807 IO depths : 1=0.3%, 2=2.2%, 4=67.4%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:55.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.807 complete : 0=0.0%, 4=94.7%, 8=5.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.807 issued rwts: total=10580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.807 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:55.807 filename1: (groupid=0, jobs=1): err= 0: pid=1814535: Fri Apr 26 15:38:12 2024 00:27:55.807 read: IOPS=2030, BW=15.9MiB/s (16.6MB/s)(79.3MiB/5002msec) 00:27:55.807 slat (nsec): min=5328, max=37253, avg=6011.29, stdev=1946.01 00:27:55.807 clat (usec): min=2164, max=6978, avg=3923.61, stdev=726.88 00:27:55.807 lat (usec): min=2170, max=6984, avg=3929.62, stdev=726.84 00:27:55.807 clat percentiles (usec): 00:27:55.807 | 1.00th=[ 2704], 5.00th=[ 3130], 10.00th=[ 3261], 20.00th=[ 3425], 00:27:55.807 | 30.00th=[ 3523], 40.00th=[ 3621], 50.00th=[ 3752], 60.00th=[ 3818], 00:27:55.807 | 70.00th=[ 3982], 80.00th=[ 4359], 90.00th=[ 5211], 95.00th=[ 5538], 00:27:55.807 | 99.00th=[ 6063], 99.50th=[ 6259], 99.90th=[ 6718], 99.95th=[ 6915], 00:27:55.807 | 99.99th=[ 6980] 00:27:55.807 bw ( KiB/s): min=16032, max=16544, per=24.71%, avg=16289.67, stdev=164.30, samples=9 00:27:55.807 iops : min= 2004, max= 2068, avg=2036.11, stdev=20.42, samples=9 00:27:55.807 lat (msec) : 4=70.77%, 10=29.23% 00:27:55.807 cpu : usr=97.40%, sys=2.38%, ctx=5, majf=0, minf=53 00:27:55.807 IO depths : 1=0.4%, 2=0.8%, 4=71.1%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:55.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.807 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.807 issued rwts: total=10156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.807 latency : 
target=0, window=0, percentile=100.00%, depth=8 00:27:55.807 00:27:55.807 Run status group 0 (all jobs): 00:27:55.807 READ: bw=64.4MiB/s (67.5MB/s), 15.8MiB/s-16.5MiB/s (16.6MB/s-17.3MB/s), io=322MiB (338MB), run=5002-5004msec 00:27:55.807 15:38:12 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:55.807 15:38:12 -- target/dif.sh@43 -- # local sub 00:27:55.807 15:38:12 -- target/dif.sh@45 -- # for sub in "$@" 00:27:55.807 15:38:12 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:55.807 15:38:12 -- target/dif.sh@36 -- # local sub_id=0 00:27:55.807 15:38:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:55.807 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.807 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.807 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.807 15:38:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:55.807 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.807 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.807 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.807 15:38:12 -- target/dif.sh@45 -- # for sub in "$@" 00:27:55.807 15:38:12 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:55.807 15:38:12 -- target/dif.sh@36 -- # local sub_id=1 00:27:55.807 15:38:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:55.807 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.807 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.807 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.807 15:38:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:55.807 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.807 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.807 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.807 00:27:55.807 real 
0m24.225s 00:27:55.807 user 5m17.140s 00:27:55.807 sys 0m3.768s 00:27:55.807 15:38:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:55.807 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.807 ************************************ 00:27:55.807 END TEST fio_dif_rand_params 00:27:55.807 ************************************ 00:27:55.807 15:38:12 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:55.807 15:38:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:55.807 15:38:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:55.807 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.807 ************************************ 00:27:55.807 START TEST fio_dif_digest 00:27:55.807 ************************************ 00:27:55.807 15:38:12 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:27:55.807 15:38:12 -- target/dif.sh@123 -- # local NULL_DIF 00:27:55.807 15:38:12 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:55.807 15:38:12 -- target/dif.sh@125 -- # local hdgst ddgst 00:27:55.807 15:38:12 -- target/dif.sh@127 -- # NULL_DIF=3 00:27:55.807 15:38:12 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:55.807 15:38:12 -- target/dif.sh@127 -- # numjobs=3 00:27:55.807 15:38:12 -- target/dif.sh@127 -- # iodepth=3 00:27:55.807 15:38:12 -- target/dif.sh@127 -- # runtime=10 00:27:55.807 15:38:12 -- target/dif.sh@128 -- # hdgst=true 00:27:55.807 15:38:12 -- target/dif.sh@128 -- # ddgst=true 00:27:55.807 15:38:12 -- target/dif.sh@130 -- # create_subsystems 0 00:27:55.808 15:38:12 -- target/dif.sh@28 -- # local sub 00:27:55.808 15:38:12 -- target/dif.sh@30 -- # for sub in "$@" 00:27:55.808 15:38:12 -- target/dif.sh@31 -- # create_subsystem 0 00:27:55.808 15:38:12 -- target/dif.sh@18 -- # local sub_id=0 00:27:55.808 15:38:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:55.808 15:38:12 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:27:55.808 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.808 bdev_null0 00:27:55.808 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.808 15:38:12 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:55.808 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.808 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.808 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.808 15:38:12 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:55.808 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.808 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.808 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.808 15:38:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:55.808 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.808 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.808 [2024-04-26 15:38:12.890741] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.808 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.808 15:38:12 -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:55.808 15:38:12 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:55.808 15:38:12 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:55.808 15:38:12 -- nvmf/common.sh@521 -- # config=() 00:27:55.808 15:38:12 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:55.808 15:38:12 -- nvmf/common.sh@521 -- # local subsystem config 00:27:55.808 15:38:12 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:27:55.808 15:38:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:55.808 15:38:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:55.808 { 00:27:55.808 "params": { 00:27:55.808 "name": "Nvme$subsystem", 00:27:55.808 "trtype": "$TEST_TRANSPORT", 00:27:55.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.808 "adrfam": "ipv4", 00:27:55.808 "trsvcid": "$NVMF_PORT", 00:27:55.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.808 "hdgst": ${hdgst:-false}, 00:27:55.808 "ddgst": ${ddgst:-false} 00:27:55.808 }, 00:27:55.808 "method": "bdev_nvme_attach_controller" 00:27:55.808 } 00:27:55.808 EOF 00:27:55.808 )") 00:27:55.808 15:38:12 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:55.808 15:38:12 -- target/dif.sh@82 -- # gen_fio_conf 00:27:55.808 15:38:12 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:55.808 15:38:12 -- target/dif.sh@54 -- # local file 00:27:55.808 15:38:12 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:55.808 15:38:12 -- target/dif.sh@56 -- # cat 00:27:55.808 15:38:12 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:55.808 15:38:12 -- common/autotest_common.sh@1327 -- # shift 00:27:55.808 15:38:12 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:55.808 15:38:12 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:55.808 15:38:12 -- nvmf/common.sh@543 -- # cat 00:27:55.808 15:38:12 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:55.808 15:38:12 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:55.808 15:38:12 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:55.808 15:38:12 -- target/dif.sh@72 -- # (( file <= files )) 00:27:55.808 15:38:12 -- common/autotest_common.sh@1331 -- # awk 
'{print $3}' 00:27:55.808 15:38:12 -- nvmf/common.sh@545 -- # jq . 00:27:55.808 15:38:12 -- nvmf/common.sh@546 -- # IFS=, 00:27:55.808 15:38:12 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:55.808 "params": { 00:27:55.808 "name": "Nvme0", 00:27:55.808 "trtype": "tcp", 00:27:55.808 "traddr": "10.0.0.2", 00:27:55.808 "adrfam": "ipv4", 00:27:55.808 "trsvcid": "4420", 00:27:55.808 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:55.808 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:55.808 "hdgst": true, 00:27:55.808 "ddgst": true 00:27:55.808 }, 00:27:55.808 "method": "bdev_nvme_attach_controller" 00:27:55.808 }' 00:27:55.808 15:38:12 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:55.808 15:38:12 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:55.808 15:38:12 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:55.808 15:38:12 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:55.808 15:38:12 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:55.808 15:38:12 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:55.808 15:38:12 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:55.808 15:38:12 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:55.808 15:38:12 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:55.808 15:38:12 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:56.068 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:56.068 ... 
00:27:56.068 fio-3.35 00:27:56.068 Starting 3 threads 00:27:56.068 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.295 00:28:08.295 filename0: (groupid=0, jobs=1): err= 0: pid=1816054: Fri Apr 26 15:38:23 2024 00:28:08.295 read: IOPS=217, BW=27.2MiB/s (28.5MB/s)(273MiB/10046msec) 00:28:08.295 slat (nsec): min=5639, max=42124, avg=7361.67, stdev=1900.68 00:28:08.295 clat (usec): min=8450, max=55896, avg=13758.13, stdev=2281.42 00:28:08.295 lat (usec): min=8456, max=55905, avg=13765.49, stdev=2281.47 00:28:08.295 clat percentiles (usec): 00:28:08.295 | 1.00th=[ 9634], 5.00th=[11338], 10.00th=[12256], 20.00th=[12780], 00:28:08.295 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:28:08.295 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15139], 95.00th=[15533], 00:28:08.295 | 99.00th=[16450], 99.50th=[16712], 99.90th=[55313], 99.95th=[55837], 00:28:08.295 | 99.99th=[55837] 00:28:08.295 bw ( KiB/s): min=25600, max=30208, per=34.01%, avg=27955.20, stdev=967.18, samples=20 00:28:08.295 iops : min= 200, max= 236, avg=218.40, stdev= 7.56, samples=20 00:28:08.295 lat (msec) : 10=1.42%, 20=98.35%, 50=0.05%, 100=0.18% 00:28:08.295 cpu : usr=96.08%, sys=3.70%, ctx=27, majf=0, minf=138 00:28:08.295 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:08.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.295 issued rwts: total=2186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.295 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:08.295 filename0: (groupid=0, jobs=1): err= 0: pid=1816055: Fri Apr 26 15:38:23 2024 00:28:08.295 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(288MiB/10042msec) 00:28:08.295 slat (nsec): min=5604, max=31699, avg=7345.00, stdev=1719.33 00:28:08.295 clat (usec): min=7740, max=47701, avg=13043.05, stdev=1630.68 00:28:08.295 lat (usec): min=7746, max=47707, avg=13050.39, 
stdev=1630.62 00:28:08.295 clat percentiles (usec): 00:28:08.295 | 1.00th=[ 8848], 5.00th=[10159], 10.00th=[11600], 20.00th=[12256], 00:28:08.295 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:28:08.296 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14353], 95.00th=[14746], 00:28:08.296 | 99.00th=[15401], 99.50th=[15926], 99.90th=[16712], 99.95th=[45351], 00:28:08.296 | 99.99th=[47449] 00:28:08.296 bw ( KiB/s): min=28416, max=32000, per=35.86%, avg=29478.40, stdev=929.63, samples=20 00:28:08.296 iops : min= 222, max= 250, avg=230.30, stdev= 7.26, samples=20 00:28:08.296 lat (msec) : 10=4.60%, 20=95.31%, 50=0.09% 00:28:08.296 cpu : usr=95.78%, sys=4.00%, ctx=32, majf=0, minf=104 00:28:08.296 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:08.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.296 issued rwts: total=2305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.296 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:08.296 filename0: (groupid=0, jobs=1): err= 0: pid=1816056: Fri Apr 26 15:38:23 2024 00:28:08.296 read: IOPS=195, BW=24.5MiB/s (25.7MB/s)(245MiB/10014msec) 00:28:08.296 slat (nsec): min=5595, max=33223, avg=7445.12, stdev=1665.42 00:28:08.296 clat (usec): min=8845, max=57234, avg=15309.41, stdev=5321.37 00:28:08.296 lat (usec): min=8854, max=57243, avg=15316.85, stdev=5321.33 00:28:08.296 clat percentiles (usec): 00:28:08.296 | 1.00th=[11731], 5.00th=[12780], 10.00th=[13304], 20.00th=[13829], 00:28:08.296 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:28:08.296 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16188], 95.00th=[16712], 00:28:08.296 | 99.00th=[54789], 99.50th=[56361], 99.90th=[56886], 99.95th=[57410], 00:28:08.296 | 99.99th=[57410] 00:28:08.296 bw ( KiB/s): min=19712, max=27136, per=30.49%, avg=25062.40, stdev=1806.19, samples=20 
00:28:08.296 iops : min= 154, max= 212, avg=195.80, stdev=14.11, samples=20 00:28:08.296 lat (msec) : 10=0.20%, 20=98.11%, 50=0.05%, 100=1.63% 00:28:08.296 cpu : usr=95.69%, sys=4.08%, ctx=14, majf=0, minf=144 00:28:08.296 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:08.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.296 issued rwts: total=1961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.296 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:08.296 00:28:08.296 Run status group 0 (all jobs): 00:28:08.296 READ: bw=80.3MiB/s (84.2MB/s), 24.5MiB/s-28.7MiB/s (25.7MB/s-30.1MB/s), io=807MiB (846MB), run=10014-10046msec 00:28:08.296 15:38:24 -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:08.296 15:38:24 -- target/dif.sh@43 -- # local sub 00:28:08.296 15:38:24 -- target/dif.sh@45 -- # for sub in "$@" 00:28:08.296 15:38:24 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:08.296 15:38:24 -- target/dif.sh@36 -- # local sub_id=0 00:28:08.296 15:38:24 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:08.296 15:38:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.296 15:38:24 -- common/autotest_common.sh@10 -- # set +x 00:28:08.296 15:38:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.296 15:38:24 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:08.296 15:38:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.296 15:38:24 -- common/autotest_common.sh@10 -- # set +x 00:28:08.296 15:38:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.296 00:28:08.296 real 0m11.222s 00:28:08.296 user 0m40.553s 00:28:08.296 sys 0m1.518s 00:28:08.296 15:38:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:08.296 15:38:24 -- common/autotest_common.sh@10 -- # set +x 00:28:08.296 
************************************ 00:28:08.296 END TEST fio_dif_digest 00:28:08.296 ************************************ 00:28:08.296 15:38:24 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:08.296 15:38:24 -- target/dif.sh@147 -- # nvmftestfini 00:28:08.296 15:38:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:08.296 15:38:24 -- nvmf/common.sh@117 -- # sync 00:28:08.296 15:38:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:08.296 15:38:24 -- nvmf/common.sh@120 -- # set +e 00:28:08.296 15:38:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:08.296 15:38:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:08.296 rmmod nvme_tcp 00:28:08.296 rmmod nvme_fabrics 00:28:08.296 rmmod nvme_keyring 00:28:08.296 15:38:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:08.296 15:38:24 -- nvmf/common.sh@124 -- # set -e 00:28:08.296 15:38:24 -- nvmf/common.sh@125 -- # return 0 00:28:08.296 15:38:24 -- nvmf/common.sh@478 -- # '[' -n 1805554 ']' 00:28:08.296 15:38:24 -- nvmf/common.sh@479 -- # killprocess 1805554 00:28:08.296 15:38:24 -- common/autotest_common.sh@936 -- # '[' -z 1805554 ']' 00:28:08.296 15:38:24 -- common/autotest_common.sh@940 -- # kill -0 1805554 00:28:08.296 15:38:24 -- common/autotest_common.sh@941 -- # uname 00:28:08.296 15:38:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:08.296 15:38:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1805554 00:28:08.296 15:38:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:08.296 15:38:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:08.296 15:38:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1805554' 00:28:08.296 killing process with pid 1805554 00:28:08.296 15:38:24 -- common/autotest_common.sh@955 -- # kill 1805554 00:28:08.296 15:38:24 -- common/autotest_common.sh@960 -- # wait 1805554 00:28:08.296 15:38:24 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:28:08.296 
15:38:24 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:10.847 Waiting for block devices as requested 00:28:10.847 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:10.847 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:10.847 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:10.847 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:10.848 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:10.848 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:10.848 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:11.108 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:11.108 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:11.369 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:11.369 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:11.369 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:11.369 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:11.629 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:11.629 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:11.629 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:11.629 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:12.215 15:38:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:12.215 15:38:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:12.215 15:38:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:12.215 15:38:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:12.215 15:38:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.215 15:38:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:12.215 15:38:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.221 15:38:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:14.221 00:28:14.221 real 1m17.873s 00:28:14.221 user 8m0.481s 00:28:14.221 sys 0m19.751s 00:28:14.221 15:38:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:14.221 15:38:31 -- 
common/autotest_common.sh@10 -- # set +x 00:28:14.221 ************************************ 00:28:14.221 END TEST nvmf_dif 00:28:14.221 ************************************ 00:28:14.221 15:38:31 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:14.221 15:38:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:14.221 15:38:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:14.221 15:38:31 -- common/autotest_common.sh@10 -- # set +x 00:28:14.221 ************************************ 00:28:14.221 START TEST nvmf_abort_qd_sizes 00:28:14.221 ************************************ 00:28:14.221 15:38:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:14.482 * Looking for test storage... 00:28:14.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:14.482 15:38:31 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:14.482 15:38:31 -- nvmf/common.sh@7 -- # uname -s 00:28:14.482 15:38:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:14.482 15:38:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:14.482 15:38:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:14.482 15:38:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:14.482 15:38:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:14.482 15:38:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:14.482 15:38:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:14.482 15:38:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:14.482 15:38:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:14.482 15:38:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:14.482 15:38:31 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:14.482 15:38:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:14.482 15:38:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:14.483 15:38:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:14.483 15:38:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:14.483 15:38:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:14.483 15:38:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:14.483 15:38:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.483 15:38:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.483 15:38:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.483 15:38:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.483 15:38:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.483 15:38:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.483 15:38:31 -- paths/export.sh@5 -- # export PATH 00:28:14.483 15:38:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.483 15:38:31 -- nvmf/common.sh@47 -- # : 0 00:28:14.483 15:38:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:14.483 15:38:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:14.483 15:38:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:14.483 15:38:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:14.483 15:38:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:14.483 15:38:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:14.483 15:38:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:14.483 15:38:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:14.483 15:38:31 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:14.483 15:38:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:14.483 15:38:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:14.483 15:38:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:14.483 15:38:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:14.483 15:38:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:14.483 15:38:31 -- 
nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.483 15:38:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:14.483 15:38:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.483 15:38:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:14.483 15:38:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:14.483 15:38:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:14.483 15:38:31 -- common/autotest_common.sh@10 -- # set +x 00:28:22.620 15:38:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:22.620 15:38:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:22.620 15:38:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:22.620 15:38:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:22.620 15:38:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:22.620 15:38:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:22.620 15:38:38 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:22.620 15:38:38 -- nvmf/common.sh@295 -- # net_devs=() 00:28:22.620 15:38:38 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:22.620 15:38:38 -- nvmf/common.sh@296 -- # e810=() 00:28:22.620 15:38:38 -- nvmf/common.sh@296 -- # local -ga e810 00:28:22.620 15:38:38 -- nvmf/common.sh@297 -- # x722=() 00:28:22.620 15:38:38 -- nvmf/common.sh@297 -- # local -ga x722 00:28:22.620 15:38:38 -- nvmf/common.sh@298 -- # mlx=() 00:28:22.620 15:38:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:22.620 15:38:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.620 15:38:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.620 15:38:38 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.620 15:38:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.620 15:38:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.620 15:38:38 -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.620 15:38:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.620 15:38:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.620 15:38:38 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.620 15:38:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.620 15:38:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.620 15:38:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:22.620 15:38:38 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:22.620 15:38:38 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:22.620 15:38:38 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:22.620 15:38:38 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:22.620 15:38:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:22.620 15:38:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.620 15:38:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:22.620 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:22.620 15:38:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.620 15:38:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.620 15:38:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.620 15:38:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.620 15:38:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.620 15:38:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.620 15:38:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:22.620 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:22.620 15:38:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.620 15:38:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.620 15:38:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.620 15:38:38 -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:28:22.620 15:38:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.620 15:38:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:22.620 15:38:38 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:22.620 15:38:38 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:22.620 15:38:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.620 15:38:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.620 15:38:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:22.620 15:38:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.620 15:38:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:22.620 Found net devices under 0000:31:00.0: cvl_0_0 00:28:22.620 15:38:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.620 15:38:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.620 15:38:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.620 15:38:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:22.621 15:38:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.621 15:38:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:22.621 Found net devices under 0000:31:00.1: cvl_0_1 00:28:22.621 15:38:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.621 15:38:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:22.621 15:38:38 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:22.621 15:38:38 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:22.621 15:38:38 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:22.621 15:38:38 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:22.621 15:38:38 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.621 15:38:38 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.621 15:38:38 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.621 
15:38:38 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:22.621 15:38:38 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.621 15:38:38 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.621 15:38:38 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:22.621 15:38:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.621 15:38:38 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.621 15:38:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:22.621 15:38:38 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:22.621 15:38:38 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.621 15:38:38 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.621 15:38:38 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.621 15:38:38 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.621 15:38:38 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:22.621 15:38:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.621 15:38:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.621 15:38:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.621 15:38:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:22.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:28:22.621 00:28:22.621 --- 10.0.0.2 ping statistics --- 00:28:22.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.621 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:28:22.621 15:38:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:22.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:28:22.621 00:28:22.621 --- 10.0.0.1 ping statistics --- 00:28:22.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.621 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:28:22.621 15:38:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.621 15:38:39 -- nvmf/common.sh@411 -- # return 0 00:28:22.621 15:38:39 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:28:22.621 15:38:39 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:25.164 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:25.164 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:25.164 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:25.164 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:25.164 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:25.164 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:25.164 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:25.164 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:25.164 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:25.164 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:25.164 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:25.164 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:25.164 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:25.164 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:25.164 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:25.164 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:25.164 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:25.164 15:38:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:25.164 15:38:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:25.164 15:38:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:25.164 15:38:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:25.164 15:38:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:25.164 15:38:42 
-- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:25.164 15:38:42 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:25.164 15:38:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:25.164 15:38:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:25.164 15:38:42 -- common/autotest_common.sh@10 -- # set +x 00:28:25.164 15:38:42 -- nvmf/common.sh@470 -- # nvmfpid=1825608 00:28:25.164 15:38:42 -- nvmf/common.sh@471 -- # waitforlisten 1825608 00:28:25.164 15:38:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:25.164 15:38:42 -- common/autotest_common.sh@817 -- # '[' -z 1825608 ']' 00:28:25.164 15:38:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.164 15:38:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:25.164 15:38:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:25.164 15:38:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:25.164 15:38:42 -- common/autotest_common.sh@10 -- # set +x 00:28:25.426 [2024-04-26 15:38:42.653573] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:28:25.426 [2024-04-26 15:38:42.653625] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.426 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.426 [2024-04-26 15:38:42.722601] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:25.426 [2024-04-26 15:38:42.792279] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:25.426 [2024-04-26 15:38:42.792316] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:25.426 [2024-04-26 15:38:42.792326] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:25.426 [2024-04-26 15:38:42.792334] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:25.426 [2024-04-26 15:38:42.792341] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:25.426 [2024-04-26 15:38:42.792503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.426 [2024-04-26 15:38:42.792623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:25.426 [2024-04-26 15:38:42.792783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.426 [2024-04-26 15:38:42.792783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:25.998 15:38:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:25.998 15:38:43 -- common/autotest_common.sh@850 -- # return 0 00:28:25.998 15:38:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:25.998 15:38:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:25.998 15:38:43 -- common/autotest_common.sh@10 -- # set +x 00:28:26.260 15:38:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.260 15:38:43 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:26.260 15:38:43 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:26.260 15:38:43 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:26.260 15:38:43 -- scripts/common.sh@309 -- # local bdf bdfs 00:28:26.260 15:38:43 -- scripts/common.sh@310 -- # local nvmes 00:28:26.260 15:38:43 -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:28:26.260 
15:38:43 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:26.260 15:38:43 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:26.260 15:38:43 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:28:26.260 15:38:43 -- scripts/common.sh@320 -- # uname -s 00:28:26.260 15:38:43 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:26.260 15:38:43 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:26.260 15:38:43 -- scripts/common.sh@325 -- # (( 1 )) 00:28:26.260 15:38:43 -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:28:26.260 15:38:43 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:28:26.260 15:38:43 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:28:26.260 15:38:43 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:26.260 15:38:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:26.260 15:38:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:26.260 15:38:43 -- common/autotest_common.sh@10 -- # set +x 00:28:26.260 ************************************ 00:28:26.260 START TEST spdk_target_abort 00:28:26.260 ************************************ 00:28:26.260 15:38:43 -- common/autotest_common.sh@1111 -- # spdk_target 00:28:26.260 15:38:43 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:26.260 15:38:43 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:28:26.260 15:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:26.260 15:38:43 -- common/autotest_common.sh@10 -- # set +x 00:28:26.521 spdk_targetn1 00:28:26.521 15:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:26.521 15:38:43 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:26.521 15:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:26.521 15:38:43 -- common/autotest_common.sh@10 -- # set +x 00:28:26.521 [2024-04-26 
15:38:43.915905] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.521 15:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:26.521 15:38:43 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:26.521 15:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:26.521 15:38:43 -- common/autotest_common.sh@10 -- # set +x 00:28:26.521 15:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:26.521 15:38:43 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:26.521 15:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:26.521 15:38:43 -- common/autotest_common.sh@10 -- # set +x 00:28:26.521 15:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:26.521 15:38:43 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:26.521 15:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:26.521 15:38:43 -- common/autotest_common.sh@10 -- # set +x 00:28:26.521 [2024-04-26 15:38:43.956167] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.521 15:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:26.521 15:38:43 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:26.521 15:38:43 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:26.521 15:38:43 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:26.521 15:38:43 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:26.521 15:38:43 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:26.521 15:38:43 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:26.522 15:38:43 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:26.522 15:38:43 -- 
target/abort_qd_sizes.sh@24 -- # local target r 00:28:26.522 15:38:43 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:26.522 15:38:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:26.522 15:38:43 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:26.522 15:38:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:26.522 15:38:43 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:26.522 15:38:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:26.522 15:38:43 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:26.522 15:38:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:26.522 15:38:43 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:26.522 15:38:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:26.522 15:38:43 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:26.522 15:38:43 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:26.522 15:38:43 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:26.783 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.783 [2024-04-26 15:38:44.133271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:688 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:28:26.783 [2024-04-26 15:38:44.133295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:28:26.783 [2024-04-26 15:38:44.139340] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:2 cid:189 nsid:1 lba:840 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:28:26.783 [2024-04-26 15:38:44.139357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:006a p:1 m:0 dnr:0 00:28:26.783 [2024-04-26 15:38:44.139877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:856 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:28:26.783 [2024-04-26 15:38:44.139892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:006d p:1 m:0 dnr:0 00:28:26.783 [2024-04-26 15:38:44.200228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2976 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:28:26.783 [2024-04-26 15:38:44.200246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:30.087 Initializing NVMe Controllers 00:28:30.087 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:30.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:30.087 Initialization complete. Launching workers. 
00:28:30.087 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12124, failed: 4 00:28:30.087 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3193, failed to submit 8935 00:28:30.087 success 730, unsuccess 2463, failed 0 00:28:30.087 15:38:47 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:30.087 15:38:47 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:30.087 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.087 [2024-04-26 15:38:47.349005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:640 len:8 PRP1 0x200007c58000 PRP2 0x0 00:28:30.087 [2024-04-26 15:38:47.349048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:28:30.657 [2024-04-26 15:38:48.004020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:15504 len:8 PRP1 0x200007c4c000 PRP2 0x0 00:28:30.657 [2024-04-26 15:38:48.004053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:009a p:0 m:0 dnr:0 00:28:33.203 [2024-04-26 15:38:50.358871] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1107b90 is same with the state(5) to be set 00:28:33.203 [2024-04-26 15:38:50.358905] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1107b90 is same with the state(5) to be set 00:28:33.203 [2024-04-26 15:38:50.358913] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1107b90 is same with the state(5) to be set 00:28:33.203 [2024-04-26 15:38:50.358920] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1107b90 is same with the 
state(5) to be set 00:28:33.203 [2024-04-26 15:38:50.358927] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1107b90 is same with the state(5) to be set 00:28:33.203 [2024-04-26 15:38:50.358934] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1107b90 is same with the state(5) to be set 00:28:33.203 [2024-04-26 15:38:50.358940] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1107b90 is same with the state(5) to be set 00:28:33.203 [2024-04-26 15:38:50.358946] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1107b90 is same with the state(5) to be set 00:28:33.203 [2024-04-26 15:38:50.358953] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1107b90 is same with the state(5) to be set 00:28:33.203 [2024-04-26 15:38:50.358959] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1107b90 is same with the state(5) to be set 00:28:33.203 [2024-04-26 15:38:50.358965] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1107b90 is same with the state(5) to be set 00:28:33.203 Initializing NVMe Controllers 00:28:33.203 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:33.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:33.203 Initialization complete. Launching workers. 
00:28:33.203 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8452, failed: 2 00:28:33.203 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1214, failed to submit 7240 00:28:33.203 success 387, unsuccess 827, failed 0 00:28:33.203 15:38:50 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:33.203 15:38:50 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:33.203 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.774 [2024-04-26 15:38:51.097485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:150 nsid:1 lba:39296 len:8 PRP1 0x2000078f0000 PRP2 0x0 00:28:33.774 [2024-04-26 15:38:51.097510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:150 cdw0:0 sqhd:0091 p:1 m:0 dnr:0 00:28:37.074 Initializing NVMe Controllers 00:28:37.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:37.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:37.074 Initialization complete. Launching workers. 
00:28:37.074 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41839, failed: 1 00:28:37.074 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2625, failed to submit 39215 00:28:37.074 success 575, unsuccess 2050, failed 0 00:28:37.074 15:38:53 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:37.074 15:38:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:37.074 15:38:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.074 15:38:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:37.074 15:38:53 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:37.074 15:38:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:37.074 15:38:53 -- common/autotest_common.sh@10 -- # set +x 00:28:38.459 15:38:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.459 15:38:55 -- target/abort_qd_sizes.sh@61 -- # killprocess 1825608 00:28:38.459 15:38:55 -- common/autotest_common.sh@936 -- # '[' -z 1825608 ']' 00:28:38.459 15:38:55 -- common/autotest_common.sh@940 -- # kill -0 1825608 00:28:38.459 15:38:55 -- common/autotest_common.sh@941 -- # uname 00:28:38.459 15:38:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:38.459 15:38:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1825608 00:28:38.459 15:38:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:38.459 15:38:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:38.459 15:38:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1825608' 00:28:38.459 killing process with pid 1825608 00:28:38.459 15:38:55 -- common/autotest_common.sh@955 -- # kill 1825608 00:28:38.459 15:38:55 -- common/autotest_common.sh@960 -- # wait 1825608 00:28:38.459 00:28:38.459 real 0m12.212s 00:28:38.459 user 0m50.189s 00:28:38.459 sys 0m1.689s 00:28:38.459 15:38:55 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:28:38.459 15:38:55 -- common/autotest_common.sh@10 -- # set +x 00:28:38.459 ************************************ 00:28:38.459 END TEST spdk_target_abort 00:28:38.459 ************************************ 00:28:38.459 15:38:55 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:38.459 15:38:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:38.459 15:38:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:38.459 15:38:55 -- common/autotest_common.sh@10 -- # set +x 00:28:38.721 ************************************ 00:28:38.721 START TEST kernel_target_abort 00:28:38.721 ************************************ 00:28:38.721 15:38:56 -- common/autotest_common.sh@1111 -- # kernel_target 00:28:38.721 15:38:56 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:38.721 15:38:56 -- nvmf/common.sh@717 -- # local ip 00:28:38.721 15:38:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:38.721 15:38:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:38.721 15:38:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.721 15:38:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.721 15:38:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:38.721 15:38:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.721 15:38:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:38.721 15:38:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:38.721 15:38:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:38.721 15:38:56 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:38.721 15:38:56 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:38.721 15:38:56 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:28:38.721 15:38:56 -- nvmf/common.sh@624 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:38.721 15:38:56 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:38.721 15:38:56 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:38.721 15:38:56 -- nvmf/common.sh@628 -- # local block nvme 00:28:38.721 15:38:56 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:28:38.721 15:38:56 -- nvmf/common.sh@631 -- # modprobe nvmet 00:28:38.721 15:38:56 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:38.721 15:38:56 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:42.025 Waiting for block devices as requested 00:28:42.025 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:42.285 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:42.285 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:42.285 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:42.545 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:42.545 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:42.545 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:42.804 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:42.804 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:42.804 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:43.064 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:43.064 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:43.064 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:43.064 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:43.324 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:43.324 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:43.324 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:43.584 15:39:00 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:28:43.584 15:39:00 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:43.584 15:39:00 -- nvmf/common.sh@641 -- # 
is_block_zoned nvme0n1 00:28:43.584 15:39:00 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:28:43.584 15:39:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:43.584 15:39:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:43.584 15:39:00 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:28:43.584 15:39:00 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:43.584 15:39:00 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:43.845 No valid GPT data, bailing 00:28:43.845 15:39:01 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:43.845 15:39:01 -- scripts/common.sh@391 -- # pt= 00:28:43.845 15:39:01 -- scripts/common.sh@392 -- # return 1 00:28:43.845 15:39:01 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:28:43.845 15:39:01 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:28:43.845 15:39:01 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:43.845 15:39:01 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:43.845 15:39:01 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:43.845 15:39:01 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:43.845 15:39:01 -- nvmf/common.sh@656 -- # echo 1 00:28:43.845 15:39:01 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:28:43.845 15:39:01 -- nvmf/common.sh@658 -- # echo 1 00:28:43.845 15:39:01 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:28:43.845 15:39:01 -- nvmf/common.sh@661 -- # echo tcp 00:28:43.845 15:39:01 -- nvmf/common.sh@662 -- # echo 4420 00:28:43.845 15:39:01 -- nvmf/common.sh@663 -- # echo ipv4 00:28:43.845 15:39:01 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:43.845 15:39:01 -- nvmf/common.sh@669 
-- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:28:43.845 00:28:43.845 Discovery Log Number of Records 2, Generation counter 2 00:28:43.845 =====Discovery Log Entry 0====== 00:28:43.845 trtype: tcp 00:28:43.845 adrfam: ipv4 00:28:43.845 subtype: current discovery subsystem 00:28:43.845 treq: not specified, sq flow control disable supported 00:28:43.845 portid: 1 00:28:43.845 trsvcid: 4420 00:28:43.845 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:43.845 traddr: 10.0.0.1 00:28:43.845 eflags: none 00:28:43.845 sectype: none 00:28:43.845 =====Discovery Log Entry 1====== 00:28:43.845 trtype: tcp 00:28:43.845 adrfam: ipv4 00:28:43.845 subtype: nvme subsystem 00:28:43.845 treq: not specified, sq flow control disable supported 00:28:43.845 portid: 1 00:28:43.845 trsvcid: 4420 00:28:43.845 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:43.845 traddr: 10.0.0.1 00:28:43.845 eflags: none 00:28:43.845 sectype: none 00:28:43.845 15:39:01 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:43.845 15:39:01 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:43.845 15:39:01 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:43.845 15:39:01 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:43.845 15:39:01 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:43.845 15:39:01 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:43.845 15:39:01 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:43.845 15:39:01 -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:43.845 15:39:01 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:43.845 15:39:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:43.845 15:39:01 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:43.845 15:39:01 -- 
target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:43.845 15:39:01 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:43.845 15:39:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:43.845 15:39:01 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:43.845 15:39:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:43.845 15:39:01 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:43.845 15:39:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:43.845 15:39:01 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:43.845 15:39:01 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:43.845 15:39:01 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:43.845 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.145 Initializing NVMe Controllers 00:28:47.145 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:47.145 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:47.145 Initialization complete. Launching workers. 
00:28:47.145 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64177, failed: 0 00:28:47.145 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 64177, failed to submit 0 00:28:47.145 success 0, unsuccess 64177, failed 0 00:28:47.145 15:39:04 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:47.145 15:39:04 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:47.145 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.444 Initializing NVMe Controllers 00:28:50.444 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:50.444 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:50.444 Initialization complete. Launching workers. 00:28:50.444 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 106248, failed: 0 00:28:50.444 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26742, failed to submit 79506 00:28:50.444 success 0, unsuccess 26742, failed 0 00:28:50.444 15:39:07 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:50.444 15:39:07 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:50.444 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.745 Initializing NVMe Controllers 00:28:53.745 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:53.745 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:53.745 Initialization complete. Launching workers. 
00:28:53.745 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 102024, failed: 0 00:28:53.745 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25522, failed to submit 76502 00:28:53.745 success 0, unsuccess 25522, failed 0 00:28:53.745 15:39:10 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:53.745 15:39:10 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:53.745 15:39:10 -- nvmf/common.sh@675 -- # echo 0 00:28:53.745 15:39:10 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:53.745 15:39:10 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:53.745 15:39:10 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:53.745 15:39:10 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:53.745 15:39:10 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:28:53.745 15:39:10 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:28:53.745 15:39:10 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:57.075 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:57.075 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:57.075 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:57.075 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:57.075 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:57.075 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:57.075 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:57.075 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:57.075 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:57.075 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:57.075 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:57.075 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 
00:28:57.075 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:57.075 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:57.075 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:57.075 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:58.559 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:58.820 00:28:58.820 real 0m20.256s 00:28:58.820 user 0m9.610s 00:28:58.820 sys 0m6.195s 00:28:58.820 15:39:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:58.820 15:39:16 -- common/autotest_common.sh@10 -- # set +x 00:28:58.820 ************************************ 00:28:58.820 END TEST kernel_target_abort 00:28:58.820 ************************************ 00:28:59.081 15:39:16 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:59.081 15:39:16 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:59.081 15:39:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:59.081 15:39:16 -- nvmf/common.sh@117 -- # sync 00:28:59.081 15:39:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:59.081 15:39:16 -- nvmf/common.sh@120 -- # set +e 00:28:59.081 15:39:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:59.081 15:39:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:59.081 rmmod nvme_tcp 00:28:59.081 rmmod nvme_fabrics 00:28:59.081 rmmod nvme_keyring 00:28:59.081 15:39:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:59.081 15:39:16 -- nvmf/common.sh@124 -- # set -e 00:28:59.081 15:39:16 -- nvmf/common.sh@125 -- # return 0 00:28:59.081 15:39:16 -- nvmf/common.sh@478 -- # '[' -n 1825608 ']' 00:28:59.081 15:39:16 -- nvmf/common.sh@479 -- # killprocess 1825608 00:28:59.081 15:39:16 -- common/autotest_common.sh@936 -- # '[' -z 1825608 ']' 00:28:59.081 15:39:16 -- common/autotest_common.sh@940 -- # kill -0 1825608 00:28:59.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1825608) - No such process 00:28:59.081 15:39:16 -- common/autotest_common.sh@963 -- # echo 'Process with 
pid 1825608 is not found' 00:28:59.081 Process with pid 1825608 is not found 00:28:59.081 15:39:16 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:28:59.081 15:39:16 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:02.382 Waiting for block devices as requested 00:29:02.382 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:02.643 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:02.643 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:02.643 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:02.914 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:02.914 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:02.914 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:03.175 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:03.175 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:29:03.435 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:03.435 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:03.435 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:03.435 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:03.695 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:03.695 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:03.695 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:03.695 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:03.956 15:39:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:03.956 15:39:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:03.956 15:39:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:03.956 15:39:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:03.956 15:39:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.956 15:39:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:03.956 15:39:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.506 15:39:23 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:06.506 00:29:06.506 real 0m51.847s 00:29:06.506 
user 1m4.940s 00:29:06.506 sys 0m18.665s 00:29:06.506 15:39:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:06.506 15:39:23 -- common/autotest_common.sh@10 -- # set +x 00:29:06.506 ************************************ 00:29:06.506 END TEST nvmf_abort_qd_sizes 00:29:06.506 ************************************ 00:29:06.506 15:39:23 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:06.506 15:39:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:06.506 15:39:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:06.506 15:39:23 -- common/autotest_common.sh@10 -- # set +x 00:29:06.506 ************************************ 00:29:06.506 START TEST keyring_file 00:29:06.506 ************************************ 00:29:06.506 15:39:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:06.506 * Looking for test storage... 00:29:06.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:06.506 15:39:23 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:06.506 15:39:23 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:06.506 15:39:23 -- nvmf/common.sh@7 -- # uname -s 00:29:06.506 15:39:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:06.506 15:39:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:06.506 15:39:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:06.506 15:39:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:06.506 15:39:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:06.506 15:39:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:06.506 15:39:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:06.506 15:39:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:06.506 15:39:23 -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:06.506 15:39:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:06.506 15:39:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:06.506 15:39:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:06.506 15:39:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:06.506 15:39:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:06.506 15:39:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:06.506 15:39:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:06.506 15:39:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:06.506 15:39:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:06.506 15:39:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:06.506 15:39:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:06.506 15:39:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.506 15:39:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.506 15:39:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.506 15:39:23 -- paths/export.sh@5 -- # export PATH 00:29:06.506 15:39:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.506 15:39:23 -- nvmf/common.sh@47 -- # : 0 00:29:06.506 15:39:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:06.506 15:39:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:06.506 15:39:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:06.506 15:39:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:06.506 15:39:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:06.506 15:39:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:06.506 15:39:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:06.506 15:39:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:06.506 15:39:23 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:06.506 15:39:23 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:06.506 15:39:23 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:06.506 15:39:23 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:06.506 15:39:23 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:06.506 15:39:23 -- 
keyring/file.sh@24 -- # trap cleanup EXIT 00:29:06.506 15:39:23 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:06.506 15:39:23 -- keyring/common.sh@15 -- # local name key digest path 00:29:06.506 15:39:23 -- keyring/common.sh@17 -- # name=key0 00:29:06.506 15:39:23 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:06.506 15:39:23 -- keyring/common.sh@17 -- # digest=0 00:29:06.506 15:39:23 -- keyring/common.sh@18 -- # mktemp 00:29:06.506 15:39:23 -- keyring/common.sh@18 -- # path=/tmp/tmp.7MC00K79YB 00:29:06.506 15:39:23 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:06.506 15:39:23 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:06.506 15:39:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:06.506 15:39:23 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:06.506 15:39:23 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:29:06.506 15:39:23 -- nvmf/common.sh@693 -- # digest=0 00:29:06.506 15:39:23 -- nvmf/common.sh@694 -- # python - 00:29:06.506 15:39:23 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.7MC00K79YB 00:29:06.506 15:39:23 -- keyring/common.sh@23 -- # echo /tmp/tmp.7MC00K79YB 00:29:06.506 15:39:23 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.7MC00K79YB 00:29:06.506 15:39:23 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:06.506 15:39:23 -- keyring/common.sh@15 -- # local name key digest path 00:29:06.506 15:39:23 -- keyring/common.sh@17 -- # name=key1 00:29:06.506 15:39:23 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:06.506 15:39:23 -- keyring/common.sh@17 -- # digest=0 00:29:06.506 15:39:23 -- keyring/common.sh@18 -- # mktemp 00:29:06.506 15:39:23 -- keyring/common.sh@18 -- # path=/tmp/tmp.M1pLtXlZMg 00:29:06.506 15:39:23 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:06.506 
15:39:23 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:06.506 15:39:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:06.506 15:39:23 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:06.506 15:39:23 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:29:06.506 15:39:23 -- nvmf/common.sh@693 -- # digest=0 00:29:06.506 15:39:23 -- nvmf/common.sh@694 -- # python - 00:29:06.506 15:39:23 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.M1pLtXlZMg 00:29:06.506 15:39:23 -- keyring/common.sh@23 -- # echo /tmp/tmp.M1pLtXlZMg 00:29:06.506 15:39:23 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.M1pLtXlZMg 00:29:06.506 15:39:23 -- keyring/file.sh@30 -- # tgtpid=1836778 00:29:06.506 15:39:23 -- keyring/file.sh@32 -- # waitforlisten 1836778 00:29:06.506 15:39:23 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:06.506 15:39:23 -- common/autotest_common.sh@817 -- # '[' -z 1836778 ']' 00:29:06.506 15:39:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.506 15:39:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:06.506 15:39:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.506 15:39:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:06.506 15:39:23 -- common/autotest_common.sh@10 -- # set +x 00:29:06.767 [2024-04-26 15:39:23.962841] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:29:06.767 [2024-04-26 15:39:23.962922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1836778 ] 00:29:06.767 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.767 [2024-04-26 15:39:24.027518] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.767 [2024-04-26 15:39:24.100237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.337 15:39:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:07.337 15:39:24 -- common/autotest_common.sh@850 -- # return 0 00:29:07.337 15:39:24 -- keyring/file.sh@33 -- # rpc_cmd 00:29:07.337 15:39:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.337 15:39:24 -- common/autotest_common.sh@10 -- # set +x 00:29:07.337 [2024-04-26 15:39:24.728170] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.337 null0 00:29:07.337 [2024-04-26 15:39:24.760229] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:07.337 [2024-04-26 15:39:24.760472] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:07.337 [2024-04-26 15:39:24.768234] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:07.337 15:39:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.337 15:39:24 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:07.337 15:39:24 -- common/autotest_common.sh@638 -- # local es=0 00:29:07.337 15:39:24 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:07.337 15:39:24 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:07.337 15:39:24 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:07.337 15:39:24 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:07.337 15:39:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:07.337 15:39:24 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:07.337 15:39:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.338 15:39:24 -- common/autotest_common.sh@10 -- # set +x 00:29:07.597 [2024-04-26 15:39:24.784281] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:29:07.597 { 00:29:07.597 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:07.597 "secure_channel": false, 00:29:07.597 "listen_address": { 00:29:07.597 "trtype": "tcp", 00:29:07.597 "traddr": "127.0.0.1", 00:29:07.597 "trsvcid": "4420" 00:29:07.597 }, 00:29:07.597 "method": "nvmf_subsystem_add_listener", 00:29:07.597 "req_id": 1 00:29:07.597 } 00:29:07.597 Got JSON-RPC error response 00:29:07.597 response: 00:29:07.597 { 00:29:07.597 "code": -32602, 00:29:07.597 "message": "Invalid parameters" 00:29:07.597 } 00:29:07.597 15:39:24 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:07.597 15:39:24 -- common/autotest_common.sh@641 -- # es=1 00:29:07.597 15:39:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:07.597 15:39:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:07.597 15:39:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:07.597 15:39:24 -- keyring/file.sh@46 -- # bperfpid=1836807 00:29:07.597 15:39:24 -- keyring/file.sh@48 -- # waitforlisten 1836807 /var/tmp/bperf.sock 00:29:07.597 15:39:24 -- common/autotest_common.sh@817 -- # '[' -z 1836807 ']' 00:29:07.597 15:39:24 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:07.597 15:39:24 
-- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:07.597 15:39:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:07.597 15:39:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:07.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:07.597 15:39:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:07.597 15:39:24 -- common/autotest_common.sh@10 -- # set +x 00:29:07.597 [2024-04-26 15:39:24.837389] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:29:07.597 [2024-04-26 15:39:24.837436] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1836807 ] 00:29:07.597 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.597 [2024-04-26 15:39:24.913570] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.597 [2024-04-26 15:39:24.976168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.167 15:39:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:08.167 15:39:25 -- common/autotest_common.sh@850 -- # return 0 00:29:08.167 15:39:25 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7MC00K79YB 00:29:08.167 15:39:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7MC00K79YB 00:29:08.427 15:39:25 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.M1pLtXlZMg 00:29:08.427 15:39:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.M1pLtXlZMg 00:29:08.686 15:39:25 -- keyring/file.sh@51 -- # get_key key0 
00:29:08.686 15:39:25 -- keyring/file.sh@51 -- # jq -r .path 00:29:08.686 15:39:25 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:08.686 15:39:25 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:08.686 15:39:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:08.686 15:39:26 -- keyring/file.sh@51 -- # [[ /tmp/tmp.7MC00K79YB == \/\t\m\p\/\t\m\p\.\7\M\C\0\0\K\7\9\Y\B ]] 00:29:08.686 15:39:26 -- keyring/file.sh@52 -- # get_key key1 00:29:08.686 15:39:26 -- keyring/file.sh@52 -- # jq -r .path 00:29:08.686 15:39:26 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:08.686 15:39:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:08.686 15:39:26 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:08.945 15:39:26 -- keyring/file.sh@52 -- # [[ /tmp/tmp.M1pLtXlZMg == \/\t\m\p\/\t\m\p\.\M\1\p\L\t\X\l\Z\M\g ]] 00:29:08.945 15:39:26 -- keyring/file.sh@53 -- # get_refcnt key0 00:29:08.945 15:39:26 -- keyring/common.sh@12 -- # get_key key0 00:29:08.945 15:39:26 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:08.945 15:39:26 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:08.945 15:39:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:08.945 15:39:26 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:09.205 15:39:26 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:09.205 15:39:26 -- keyring/file.sh@54 -- # get_refcnt key1 00:29:09.205 15:39:26 -- keyring/common.sh@12 -- # get_key key1 00:29:09.205 15:39:26 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:09.205 15:39:26 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:09.205 15:39:26 -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:09.205 15:39:26 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:09.205 15:39:26 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:09.205 15:39:26 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:09.205 15:39:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:09.465 [2024-04-26 15:39:26.697129] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:09.465 nvme0n1 00:29:09.465 15:39:26 -- keyring/file.sh@59 -- # get_refcnt key0 00:29:09.465 15:39:26 -- keyring/common.sh@12 -- # get_key key0 00:29:09.465 15:39:26 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:09.465 15:39:26 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:09.465 15:39:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:09.465 15:39:26 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:09.725 15:39:26 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:09.725 15:39:26 -- keyring/file.sh@60 -- # get_refcnt key1 00:29:09.725 15:39:26 -- keyring/common.sh@12 -- # get_key key1 00:29:09.725 15:39:26 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:09.725 15:39:26 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:09.725 15:39:26 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:09.725 15:39:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:29:09.725 15:39:27 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:09.725 15:39:27 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:09.985 Running I/O for 1 seconds... 00:29:10.923 00:29:10.923 Latency(us) 00:29:10.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.923 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:10.923 nvme0n1 : 1.01 13746.87 53.70 0.00 0.00 9266.57 3522.56 14854.83 00:29:10.923 =================================================================================================================== 00:29:10.923 Total : 13746.87 53.70 0.00 0.00 9266.57 3522.56 14854.83 00:29:10.923 0 00:29:10.923 15:39:28 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:10.923 15:39:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:11.182 15:39:28 -- keyring/file.sh@65 -- # get_refcnt key0 00:29:11.182 15:39:28 -- keyring/common.sh@12 -- # get_key key0 00:29:11.182 15:39:28 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:11.182 15:39:28 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:11.182 15:39:28 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:11.182 15:39:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:11.182 15:39:28 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:11.182 15:39:28 -- keyring/file.sh@66 -- # get_refcnt key1 00:29:11.182 15:39:28 -- keyring/common.sh@12 -- # get_key key1 00:29:11.182 15:39:28 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:11.183 15:39:28 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:11.183 15:39:28 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 
00:29:11.183 15:39:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:11.442 15:39:28 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:11.442 15:39:28 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:11.442 15:39:28 -- common/autotest_common.sh@638 -- # local es=0 00:29:11.442 15:39:28 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:11.442 15:39:28 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:11.442 15:39:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:11.442 15:39:28 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:11.442 15:39:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:11.442 15:39:28 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:11.442 15:39:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:11.442 [2024-04-26 15:39:28.883526] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:11.442 [2024-04-26 15:39:28.884308] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2139440 (107): Transport endpoint is not connected 00:29:11.442 [2024-04-26 
15:39:28.885305] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2139440 (9): Bad file descriptor 00:29:11.442 [2024-04-26 15:39:28.886306] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:11.442 [2024-04-26 15:39:28.886314] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:11.442 [2024-04-26 15:39:28.886319] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:11.442 request: 00:29:11.442 { 00:29:11.442 "name": "nvme0", 00:29:11.442 "trtype": "tcp", 00:29:11.442 "traddr": "127.0.0.1", 00:29:11.442 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:11.442 "adrfam": "ipv4", 00:29:11.442 "trsvcid": "4420", 00:29:11.442 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:11.442 "psk": "key1", 00:29:11.442 "method": "bdev_nvme_attach_controller", 00:29:11.442 "req_id": 1 00:29:11.442 } 00:29:11.442 Got JSON-RPC error response 00:29:11.442 response: 00:29:11.442 { 00:29:11.442 "code": -32602, 00:29:11.442 "message": "Invalid parameters" 00:29:11.442 } 00:29:11.703 15:39:28 -- common/autotest_common.sh@641 -- # es=1 00:29:11.703 15:39:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:11.703 15:39:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:11.703 15:39:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:11.703 15:39:28 -- keyring/file.sh@71 -- # get_refcnt key0 00:29:11.703 15:39:28 -- keyring/common.sh@12 -- # get_key key0 00:29:11.703 15:39:28 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:11.703 15:39:28 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:11.703 15:39:28 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:11.703 15:39:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:11.703 15:39:29 -- keyring/file.sh@71 -- # (( 1 == 1 )) 
00:29:11.703 15:39:29 -- keyring/file.sh@72 -- # get_refcnt key1 00:29:11.703 15:39:29 -- keyring/common.sh@12 -- # get_key key1 00:29:11.703 15:39:29 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:11.703 15:39:29 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:11.703 15:39:29 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:11.703 15:39:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:11.964 15:39:29 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:11.964 15:39:29 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:11.964 15:39:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:11.964 15:39:29 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:11.964 15:39:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:12.225 15:39:29 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:12.225 15:39:29 -- keyring/file.sh@77 -- # jq length 00:29:12.225 15:39:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:12.485 15:39:29 -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:12.485 15:39:29 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.7MC00K79YB 00:29:12.485 15:39:29 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.7MC00K79YB 00:29:12.485 15:39:29 -- common/autotest_common.sh@638 -- # local es=0 00:29:12.485 15:39:29 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.7MC00K79YB 00:29:12.485 15:39:29 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:12.485 15:39:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 
00:29:12.485 15:39:29 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:12.485 15:39:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:12.485 15:39:29 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7MC00K79YB 00:29:12.485 15:39:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7MC00K79YB 00:29:12.485 [2024-04-26 15:39:29.835038] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.7MC00K79YB': 0100660 00:29:12.485 [2024-04-26 15:39:29.835053] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:12.485 request: 00:29:12.485 { 00:29:12.485 "name": "key0", 00:29:12.485 "path": "/tmp/tmp.7MC00K79YB", 00:29:12.485 "method": "keyring_file_add_key", 00:29:12.485 "req_id": 1 00:29:12.485 } 00:29:12.485 Got JSON-RPC error response 00:29:12.485 response: 00:29:12.485 { 00:29:12.485 "code": -1, 00:29:12.485 "message": "Operation not permitted" 00:29:12.485 } 00:29:12.485 15:39:29 -- common/autotest_common.sh@641 -- # es=1 00:29:12.485 15:39:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:12.485 15:39:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:12.485 15:39:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:12.485 15:39:29 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.7MC00K79YB 00:29:12.485 15:39:29 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7MC00K79YB 00:29:12.485 15:39:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7MC00K79YB 00:29:12.745 15:39:30 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.7MC00K79YB 00:29:12.745 15:39:30 -- keyring/file.sh@88 -- # get_refcnt key0 00:29:12.745 15:39:30 -- keyring/common.sh@12 -- # get_key key0 00:29:12.745 15:39:30 -- 
keyring/common.sh@12 -- # jq -r .refcnt 00:29:12.745 15:39:30 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:12.745 15:39:30 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:12.745 15:39:30 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:12.745 15:39:30 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:12.745 15:39:30 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:12.745 15:39:30 -- common/autotest_common.sh@638 -- # local es=0 00:29:12.745 15:39:30 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:12.745 15:39:30 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:12.745 15:39:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:12.745 15:39:30 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:12.745 15:39:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:12.745 15:39:30 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:12.745 15:39:30 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:13.006 [2024-04-26 15:39:30.312242] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.7MC00K79YB': No such file or directory 00:29:13.006 [2024-04-26 15:39:30.312259] 
nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:13.006 [2024-04-26 15:39:30.312275] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:13.006 [2024-04-26 15:39:30.312280] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:13.006 [2024-04-26 15:39:30.312290] bdev_nvme.c:6208:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:13.006 request: 00:29:13.006 { 00:29:13.006 "name": "nvme0", 00:29:13.006 "trtype": "tcp", 00:29:13.006 "traddr": "127.0.0.1", 00:29:13.006 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:13.006 "adrfam": "ipv4", 00:29:13.006 "trsvcid": "4420", 00:29:13.006 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:13.006 "psk": "key0", 00:29:13.006 "method": "bdev_nvme_attach_controller", 00:29:13.006 "req_id": 1 00:29:13.006 } 00:29:13.006 Got JSON-RPC error response 00:29:13.006 response: 00:29:13.006 { 00:29:13.006 "code": -19, 00:29:13.006 "message": "No such device" 00:29:13.006 } 00:29:13.006 15:39:30 -- common/autotest_common.sh@641 -- # es=1 00:29:13.006 15:39:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:13.006 15:39:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:13.006 15:39:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:13.006 15:39:30 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:13.006 15:39:30 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:13.267 15:39:30 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:13.267 15:39:30 -- keyring/common.sh@15 -- # local name key digest path 00:29:13.267 15:39:30 -- keyring/common.sh@17 -- # name=key0 00:29:13.267 15:39:30 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:13.267 15:39:30 -- 
keyring/common.sh@17 -- # digest=0 00:29:13.267 15:39:30 -- keyring/common.sh@18 -- # mktemp 00:29:13.267 15:39:30 -- keyring/common.sh@18 -- # path=/tmp/tmp.hkrIpzWd0E 00:29:13.267 15:39:30 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:13.267 15:39:30 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:13.267 15:39:30 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:13.267 15:39:30 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:13.267 15:39:30 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:29:13.267 15:39:30 -- nvmf/common.sh@693 -- # digest=0 00:29:13.267 15:39:30 -- nvmf/common.sh@694 -- # python - 00:29:13.267 15:39:30 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.hkrIpzWd0E 00:29:13.267 15:39:30 -- keyring/common.sh@23 -- # echo /tmp/tmp.hkrIpzWd0E 00:29:13.267 15:39:30 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.hkrIpzWd0E 00:29:13.267 15:39:30 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hkrIpzWd0E 00:29:13.267 15:39:30 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hkrIpzWd0E 00:29:13.528 15:39:30 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:13.528 15:39:30 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:13.528 nvme0n1 00:29:13.528 15:39:30 -- keyring/file.sh@99 -- # get_refcnt key0 00:29:13.528 15:39:30 -- keyring/common.sh@12 -- # get_key key0 00:29:13.528 15:39:30 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:13.528 15:39:30 -- keyring/common.sh@10 
-- # bperf_cmd keyring_get_keys 00:29:13.528 15:39:30 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:13.528 15:39:30 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:13.788 15:39:31 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:13.788 15:39:31 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:13.788 15:39:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:14.048 15:39:31 -- keyring/file.sh@101 -- # get_key key0 00:29:14.048 15:39:31 -- keyring/file.sh@101 -- # jq -r .removed 00:29:14.049 15:39:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:14.049 15:39:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:14.049 15:39:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:14.049 15:39:31 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:14.049 15:39:31 -- keyring/file.sh@102 -- # get_refcnt key0 00:29:14.049 15:39:31 -- keyring/common.sh@12 -- # get_key key0 00:29:14.049 15:39:31 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:14.049 15:39:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:14.049 15:39:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:14.049 15:39:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:14.309 15:39:31 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:14.309 15:39:31 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:14.309 15:39:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:14.568 15:39:31 -- keyring/file.sh@104 
-- # bperf_cmd keyring_get_keys 00:29:14.568 15:39:31 -- keyring/file.sh@104 -- # jq length 00:29:14.568 15:39:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:14.568 15:39:31 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:14.568 15:39:31 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hkrIpzWd0E 00:29:14.568 15:39:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hkrIpzWd0E 00:29:14.827 15:39:32 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.M1pLtXlZMg 00:29:14.827 15:39:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.M1pLtXlZMg 00:29:14.827 15:39:32 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:14.827 15:39:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:15.088 nvme0n1 00:29:15.088 15:39:32 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:15.088 15:39:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:15.384 15:39:32 -- keyring/file.sh@112 -- # config='{ 00:29:15.384 "subsystems": [ 00:29:15.384 { 00:29:15.384 "subsystem": "keyring", 00:29:15.384 "config": [ 00:29:15.384 { 00:29:15.384 "method": "keyring_file_add_key", 00:29:15.384 "params": { 00:29:15.384 "name": "key0", 00:29:15.384 "path": "/tmp/tmp.hkrIpzWd0E" 00:29:15.384 } 00:29:15.384 }, 00:29:15.384 { 
00:29:15.384 "method": "keyring_file_add_key", 00:29:15.384 "params": { 00:29:15.384 "name": "key1", 00:29:15.384 "path": "/tmp/tmp.M1pLtXlZMg" 00:29:15.384 } 00:29:15.384 } 00:29:15.384 ] 00:29:15.384 }, 00:29:15.384 { 00:29:15.384 "subsystem": "iobuf", 00:29:15.384 "config": [ 00:29:15.384 { 00:29:15.384 "method": "iobuf_set_options", 00:29:15.384 "params": { 00:29:15.384 "small_pool_count": 8192, 00:29:15.384 "large_pool_count": 1024, 00:29:15.384 "small_bufsize": 8192, 00:29:15.384 "large_bufsize": 135168 00:29:15.384 } 00:29:15.384 } 00:29:15.384 ] 00:29:15.384 }, 00:29:15.384 { 00:29:15.384 "subsystem": "sock", 00:29:15.384 "config": [ 00:29:15.384 { 00:29:15.384 "method": "sock_impl_set_options", 00:29:15.384 "params": { 00:29:15.384 "impl_name": "posix", 00:29:15.384 "recv_buf_size": 2097152, 00:29:15.385 "send_buf_size": 2097152, 00:29:15.385 "enable_recv_pipe": true, 00:29:15.385 "enable_quickack": false, 00:29:15.385 "enable_placement_id": 0, 00:29:15.385 "enable_zerocopy_send_server": true, 00:29:15.385 "enable_zerocopy_send_client": false, 00:29:15.385 "zerocopy_threshold": 0, 00:29:15.385 "tls_version": 0, 00:29:15.385 "enable_ktls": false 00:29:15.385 } 00:29:15.385 }, 00:29:15.385 { 00:29:15.385 "method": "sock_impl_set_options", 00:29:15.385 "params": { 00:29:15.385 "impl_name": "ssl", 00:29:15.385 "recv_buf_size": 4096, 00:29:15.385 "send_buf_size": 4096, 00:29:15.385 "enable_recv_pipe": true, 00:29:15.385 "enable_quickack": false, 00:29:15.385 "enable_placement_id": 0, 00:29:15.385 "enable_zerocopy_send_server": true, 00:29:15.385 "enable_zerocopy_send_client": false, 00:29:15.385 "zerocopy_threshold": 0, 00:29:15.385 "tls_version": 0, 00:29:15.385 "enable_ktls": false 00:29:15.385 } 00:29:15.385 } 00:29:15.385 ] 00:29:15.385 }, 00:29:15.385 { 00:29:15.385 "subsystem": "vmd", 00:29:15.385 "config": [] 00:29:15.385 }, 00:29:15.385 { 00:29:15.385 "subsystem": "accel", 00:29:15.385 "config": [ 00:29:15.385 { 00:29:15.385 "method": 
"accel_set_options", 00:29:15.385 "params": { 00:29:15.385 "small_cache_size": 128, 00:29:15.385 "large_cache_size": 16, 00:29:15.385 "task_count": 2048, 00:29:15.385 "sequence_count": 2048, 00:29:15.385 "buf_count": 2048 00:29:15.385 } 00:29:15.385 } 00:29:15.385 ] 00:29:15.385 }, 00:29:15.385 { 00:29:15.385 "subsystem": "bdev", 00:29:15.385 "config": [ 00:29:15.385 { 00:29:15.385 "method": "bdev_set_options", 00:29:15.385 "params": { 00:29:15.385 "bdev_io_pool_size": 65535, 00:29:15.385 "bdev_io_cache_size": 256, 00:29:15.385 "bdev_auto_examine": true, 00:29:15.385 "iobuf_small_cache_size": 128, 00:29:15.385 "iobuf_large_cache_size": 16 00:29:15.385 } 00:29:15.385 }, 00:29:15.385 { 00:29:15.385 "method": "bdev_raid_set_options", 00:29:15.385 "params": { 00:29:15.385 "process_window_size_kb": 1024 00:29:15.385 } 00:29:15.385 }, 00:29:15.385 { 00:29:15.385 "method": "bdev_iscsi_set_options", 00:29:15.385 "params": { 00:29:15.385 "timeout_sec": 30 00:29:15.385 } 00:29:15.385 }, 00:29:15.385 { 00:29:15.385 "method": "bdev_nvme_set_options", 00:29:15.385 "params": { 00:29:15.385 "action_on_timeout": "none", 00:29:15.385 "timeout_us": 0, 00:29:15.385 "timeout_admin_us": 0, 00:29:15.385 "keep_alive_timeout_ms": 10000, 00:29:15.385 "arbitration_burst": 0, 00:29:15.385 "low_priority_weight": 0, 00:29:15.385 "medium_priority_weight": 0, 00:29:15.385 "high_priority_weight": 0, 00:29:15.385 "nvme_adminq_poll_period_us": 10000, 00:29:15.385 "nvme_ioq_poll_period_us": 0, 00:29:15.385 "io_queue_requests": 512, 00:29:15.385 "delay_cmd_submit": true, 00:29:15.385 "transport_retry_count": 4, 00:29:15.385 "bdev_retry_count": 3, 00:29:15.385 "transport_ack_timeout": 0, 00:29:15.385 "ctrlr_loss_timeout_sec": 0, 00:29:15.385 "reconnect_delay_sec": 0, 00:29:15.385 "fast_io_fail_timeout_sec": 0, 00:29:15.385 "disable_auto_failback": false, 00:29:15.385 "generate_uuids": false, 00:29:15.385 "transport_tos": 0, 00:29:15.385 "nvme_error_stat": false, 00:29:15.385 "rdma_srq_size": 0, 
00:29:15.385 "io_path_stat": false, 00:29:15.385 "allow_accel_sequence": false, 00:29:15.385 "rdma_max_cq_size": 0, 00:29:15.385 "rdma_cm_event_timeout_ms": 0, 00:29:15.385 "dhchap_digests": [ 00:29:15.385 "sha256", 00:29:15.385 "sha384", 00:29:15.385 "sha512" 00:29:15.385 ], 00:29:15.385 "dhchap_dhgroups": [ 00:29:15.385 "null", 00:29:15.385 "ffdhe2048", 00:29:15.385 "ffdhe3072", 00:29:15.385 "ffdhe4096", 00:29:15.385 "ffdhe6144", 00:29:15.385 "ffdhe8192" 00:29:15.385 ] 00:29:15.385 } 00:29:15.385 }, 00:29:15.385 { 00:29:15.385 "method": "bdev_nvme_attach_controller", 00:29:15.385 "params": { 00:29:15.385 "name": "nvme0", 00:29:15.385 "trtype": "TCP", 00:29:15.385 "adrfam": "IPv4", 00:29:15.385 "traddr": "127.0.0.1", 00:29:15.385 "trsvcid": "4420", 00:29:15.385 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:15.385 "prchk_reftag": false, 00:29:15.385 "prchk_guard": false, 00:29:15.385 "ctrlr_loss_timeout_sec": 0, 00:29:15.385 "reconnect_delay_sec": 0, 00:29:15.385 "fast_io_fail_timeout_sec": 0, 00:29:15.385 "psk": "key0", 00:29:15.385 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:15.385 "hdgst": false, 00:29:15.385 "ddgst": false 00:29:15.385 } 00:29:15.385 }, 00:29:15.385 { 00:29:15.385 "method": "bdev_nvme_set_hotplug", 00:29:15.385 "params": { 00:29:15.385 "period_us": 100000, 00:29:15.385 "enable": false 00:29:15.385 } 00:29:15.385 }, 00:29:15.385 { 00:29:15.385 "method": "bdev_wait_for_examine" 00:29:15.385 } 00:29:15.385 ] 00:29:15.385 }, 00:29:15.385 { 00:29:15.385 "subsystem": "nbd", 00:29:15.385 "config": [] 00:29:15.385 } 00:29:15.385 ] 00:29:15.385 }' 00:29:15.385 15:39:32 -- keyring/file.sh@114 -- # killprocess 1836807 00:29:15.385 15:39:32 -- common/autotest_common.sh@936 -- # '[' -z 1836807 ']' 00:29:15.385 15:39:32 -- common/autotest_common.sh@940 -- # kill -0 1836807 00:29:15.385 15:39:32 -- common/autotest_common.sh@941 -- # uname 00:29:15.385 15:39:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:15.385 15:39:32 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1836807 00:29:15.385 15:39:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:15.385 15:39:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:15.385 15:39:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1836807' 00:29:15.385 killing process with pid 1836807 00:29:15.385 15:39:32 -- common/autotest_common.sh@955 -- # kill 1836807 00:29:15.385 Received shutdown signal, test time was about 1.000000 seconds 00:29:15.385 00:29:15.385 Latency(us) 00:29:15.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.385 =================================================================================================================== 00:29:15.385 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:15.385 15:39:32 -- common/autotest_common.sh@960 -- # wait 1836807 00:29:15.647 15:39:32 -- keyring/file.sh@117 -- # bperfpid=1838608 00:29:15.647 15:39:32 -- keyring/file.sh@119 -- # waitforlisten 1838608 /var/tmp/bperf.sock 00:29:15.647 15:39:32 -- common/autotest_common.sh@817 -- # '[' -z 1838608 ']' 00:29:15.647 15:39:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:15.647 15:39:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:15.647 15:39:32 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:15.647 15:39:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:15.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:15.647 15:39:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:15.647 15:39:32 -- keyring/file.sh@115 -- # echo '{ 00:29:15.647 "subsystems": [ 00:29:15.647 { 00:29:15.647 "subsystem": "keyring", 00:29:15.647 "config": [ 00:29:15.647 { 00:29:15.647 "method": "keyring_file_add_key", 00:29:15.647 "params": { 00:29:15.647 "name": "key0", 00:29:15.647 "path": "/tmp/tmp.hkrIpzWd0E" 00:29:15.647 } 00:29:15.647 }, 00:29:15.647 { 00:29:15.647 "method": "keyring_file_add_key", 00:29:15.647 "params": { 00:29:15.647 "name": "key1", 00:29:15.647 "path": "/tmp/tmp.M1pLtXlZMg" 00:29:15.647 } 00:29:15.647 } 00:29:15.647 ] 00:29:15.647 }, 00:29:15.647 { 00:29:15.647 "subsystem": "iobuf", 00:29:15.647 "config": [ 00:29:15.647 { 00:29:15.647 "method": "iobuf_set_options", 00:29:15.647 "params": { 00:29:15.647 "small_pool_count": 8192, 00:29:15.647 "large_pool_count": 1024, 00:29:15.647 "small_bufsize": 8192, 00:29:15.647 "large_bufsize": 135168 00:29:15.647 } 00:29:15.647 } 00:29:15.647 ] 00:29:15.647 }, 00:29:15.647 { 00:29:15.647 "subsystem": "sock", 00:29:15.647 "config": [ 00:29:15.647 { 00:29:15.647 "method": "sock_impl_set_options", 00:29:15.647 "params": { 00:29:15.647 "impl_name": "posix", 00:29:15.647 "recv_buf_size": 2097152, 00:29:15.647 "send_buf_size": 2097152, 00:29:15.647 "enable_recv_pipe": true, 00:29:15.647 "enable_quickack": false, 00:29:15.647 "enable_placement_id": 0, 00:29:15.647 "enable_zerocopy_send_server": true, 00:29:15.647 "enable_zerocopy_send_client": false, 00:29:15.647 "zerocopy_threshold": 0, 00:29:15.647 "tls_version": 0, 00:29:15.647 "enable_ktls": false 00:29:15.647 } 00:29:15.647 }, 00:29:15.647 { 00:29:15.647 "method": "sock_impl_set_options", 00:29:15.647 "params": { 00:29:15.647 "impl_name": "ssl", 00:29:15.647 "recv_buf_size": 4096, 00:29:15.647 "send_buf_size": 4096, 00:29:15.647 "enable_recv_pipe": true, 00:29:15.647 "enable_quickack": false, 00:29:15.647 "enable_placement_id": 0, 00:29:15.647 
"enable_zerocopy_send_server": true, 00:29:15.647 "enable_zerocopy_send_client": false, 00:29:15.647 "zerocopy_threshold": 0, 00:29:15.647 "tls_version": 0, 00:29:15.647 "enable_ktls": false 00:29:15.647 } 00:29:15.647 } 00:29:15.647 ] 00:29:15.647 }, 00:29:15.647 { 00:29:15.647 "subsystem": "vmd", 00:29:15.647 "config": [] 00:29:15.647 }, 00:29:15.647 { 00:29:15.647 "subsystem": "accel", 00:29:15.647 "config": [ 00:29:15.647 { 00:29:15.647 "method": "accel_set_options", 00:29:15.647 "params": { 00:29:15.647 "small_cache_size": 128, 00:29:15.647 "large_cache_size": 16, 00:29:15.647 "task_count": 2048, 00:29:15.647 "sequence_count": 2048, 00:29:15.647 "buf_count": 2048 00:29:15.647 } 00:29:15.647 } 00:29:15.647 ] 00:29:15.647 }, 00:29:15.647 { 00:29:15.647 "subsystem": "bdev", 00:29:15.647 "config": [ 00:29:15.647 { 00:29:15.647 "method": "bdev_set_options", 00:29:15.647 "params": { 00:29:15.647 "bdev_io_pool_size": 65535, 00:29:15.647 "bdev_io_cache_size": 256, 00:29:15.647 "bdev_auto_examine": true, 00:29:15.647 "iobuf_small_cache_size": 128, 00:29:15.647 "iobuf_large_cache_size": 16 00:29:15.647 } 00:29:15.647 }, 00:29:15.647 { 00:29:15.647 "method": "bdev_raid_set_options", 00:29:15.647 "params": { 00:29:15.647 "process_window_size_kb": 1024 00:29:15.647 } 00:29:15.647 }, 00:29:15.647 { 00:29:15.647 "method": "bdev_iscsi_set_options", 00:29:15.647 "params": { 00:29:15.647 "timeout_sec": 30 00:29:15.647 } 00:29:15.647 }, 00:29:15.647 { 00:29:15.647 "method": "bdev_nvme_set_options", 00:29:15.647 "params": { 00:29:15.647 "action_on_timeout": "none", 00:29:15.647 "timeout_us": 0, 00:29:15.647 "timeout_admin_us": 0, 00:29:15.647 "keep_alive_timeout_ms": 10000, 00:29:15.647 "arbitration_burst": 0, 00:29:15.647 "low_priority_weight": 0, 00:29:15.647 "medium_priority_weight": 0, 00:29:15.647 "high_priority_weight": 0, 00:29:15.647 "nvme_adminq_poll_period_us": 10000, 00:29:15.647 "nvme_ioq_poll_period_us": 0, 00:29:15.647 "io_queue_requests": 512, 00:29:15.647 
"delay_cmd_submit": true, 00:29:15.647 "transport_retry_count": 4, 00:29:15.647 "bdev_retry_count": 3, 00:29:15.647 "transport_ack_timeout": 0, 00:29:15.647 "ctrlr_loss_timeout_sec": 0, 00:29:15.647 "reconnect_delay_sec": 0, 00:29:15.647 "fast_io_fail_timeout_sec": 0, 00:29:15.647 "disable_auto_failback": false, 00:29:15.647 "generate_uuids": false, 00:29:15.647 "transport_tos": 0, 00:29:15.647 "nvme_error_stat": false, 00:29:15.647 "rdma_srq_size": 0, 00:29:15.647 "io_path_stat": false, 00:29:15.647 "allow_accel_sequence": false, 00:29:15.647 "rdma_max_cq_size": 0, 00:29:15.647 "rdma_cm_event_timeout_ms": 0, 00:29:15.647 "dhchap_digests": [ 00:29:15.647 "sha256", 00:29:15.647 "sha384", 00:29:15.647 "sha512" 00:29:15.647 ], 00:29:15.647 "dhchap_dhgroups": [ 00:29:15.647 "null", 00:29:15.647 "ffdhe2048", 00:29:15.647 "ffdhe3072", 00:29:15.647 "ffdhe4096", 00:29:15.647 "ffdhe6144", 00:29:15.647 "ffdhe8192" 00:29:15.647 ] 00:29:15.647 } 00:29:15.647 }, 00:29:15.647 { 00:29:15.647 "method": "bdev_nvme_attach_controller", 00:29:15.647 "params": { 00:29:15.647 "name": "nvme0", 00:29:15.647 "trtype": "TCP", 00:29:15.647 "adrfam": "IPv4", 00:29:15.647 "traddr": "127.0.0.1", 00:29:15.647 "trsvcid": "4420", 00:29:15.647 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:15.647 "prchk_reftag": false, 00:29:15.647 "prchk_guard": false, 00:29:15.647 "ctrlr_loss_timeout_sec": 0, 00:29:15.647 "reconnect_delay_sec": 0, 00:29:15.647 "fast_io_fail_timeout_sec": 0, 00:29:15.647 "psk": "key0", 00:29:15.647 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:15.647 "hdgst": false, 00:29:15.647 "ddgst": false 00:29:15.647 } 00:29:15.647 }, 00:29:15.647 { 00:29:15.647 "method": "bdev_nvme_set_hotplug", 00:29:15.647 "params": { 00:29:15.647 "period_us": 100000, 00:29:15.647 "enable": false 00:29:15.647 } 00:29:15.647 }, 00:29:15.647 { 00:29:15.647 "method": "bdev_wait_for_examine" 00:29:15.647 } 00:29:15.647 ] 00:29:15.647 }, 00:29:15.647 { 00:29:15.647 "subsystem": "nbd", 00:29:15.647 "config": [] 
00:29:15.647 } 00:29:15.647 ] 00:29:15.647 }' 00:29:15.647 15:39:32 -- common/autotest_common.sh@10 -- # set +x 00:29:15.647 [2024-04-26 15:39:32.890525] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:29:15.648 [2024-04-26 15:39:32.890582] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1838608 ] 00:29:15.648 EAL: No free 2048 kB hugepages reported on node 1 00:29:15.648 [2024-04-26 15:39:32.965347] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.648 [2024-04-26 15:39:33.017304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.909 [2024-04-26 15:39:33.151005] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:16.480 15:39:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:16.480 15:39:33 -- common/autotest_common.sh@850 -- # return 0 00:29:16.480 15:39:33 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:16.480 15:39:33 -- keyring/file.sh@120 -- # jq length 00:29:16.480 15:39:33 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:16.480 15:39:33 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:16.480 15:39:33 -- keyring/file.sh@121 -- # get_refcnt key0 00:29:16.480 15:39:33 -- keyring/common.sh@12 -- # get_key key0 00:29:16.480 15:39:33 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:16.480 15:39:33 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:16.480 15:39:33 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:16.480 15:39:33 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:16.741 15:39:33 -- keyring/file.sh@121 -- # (( 2 == 2 )) 
00:29:16.741 15:39:33 -- keyring/file.sh@122 -- # get_refcnt key1 00:29:16.741 15:39:33 -- keyring/common.sh@12 -- # get_key key1 00:29:16.741 15:39:33 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:16.741 15:39:33 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:16.741 15:39:33 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:16.741 15:39:33 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:16.741 15:39:34 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:16.741 15:39:34 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:16.741 15:39:34 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:16.741 15:39:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:17.002 15:39:34 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:17.002 15:39:34 -- keyring/file.sh@1 -- # cleanup 00:29:17.002 15:39:34 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.hkrIpzWd0E /tmp/tmp.M1pLtXlZMg 00:29:17.002 15:39:34 -- keyring/file.sh@20 -- # killprocess 1838608 00:29:17.002 15:39:34 -- common/autotest_common.sh@936 -- # '[' -z 1838608 ']' 00:29:17.002 15:39:34 -- common/autotest_common.sh@940 -- # kill -0 1838608 00:29:17.002 15:39:34 -- common/autotest_common.sh@941 -- # uname 00:29:17.002 15:39:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:17.002 15:39:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1838608 00:29:17.002 15:39:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:17.002 15:39:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:17.002 15:39:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1838608' 00:29:17.002 killing process with pid 1838608 00:29:17.002 15:39:34 -- common/autotest_common.sh@955 -- # kill 1838608 00:29:17.002 Received shutdown 
signal, test time was about 1.000000 seconds 00:29:17.002 00:29:17.002 Latency(us) 00:29:17.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.002 =================================================================================================================== 00:29:17.002 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:17.002 15:39:34 -- common/autotest_common.sh@960 -- # wait 1838608 00:29:17.262 15:39:34 -- keyring/file.sh@21 -- # killprocess 1836778 00:29:17.262 15:39:34 -- common/autotest_common.sh@936 -- # '[' -z 1836778 ']' 00:29:17.262 15:39:34 -- common/autotest_common.sh@940 -- # kill -0 1836778 00:29:17.262 15:39:34 -- common/autotest_common.sh@941 -- # uname 00:29:17.262 15:39:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:17.262 15:39:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1836778 00:29:17.262 15:39:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:17.262 15:39:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:17.262 15:39:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1836778' 00:29:17.262 killing process with pid 1836778 00:29:17.262 15:39:34 -- common/autotest_common.sh@955 -- # kill 1836778 00:29:17.262 [2024-04-26 15:39:34.532445] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:17.262 15:39:34 -- common/autotest_common.sh@960 -- # wait 1836778 00:29:17.523 00:29:17.523 real 0m11.098s 00:29:17.523 user 0m26.444s 00:29:17.523 sys 0m2.574s 00:29:17.523 15:39:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:17.523 15:39:34 -- common/autotest_common.sh@10 -- # set +x 00:29:17.523 ************************************ 00:29:17.523 END TEST keyring_file 00:29:17.523 ************************************ 00:29:17.523 15:39:34 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:29:17.523 15:39:34 
-- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:29:17.523 15:39:34 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:29:17.523 15:39:34 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:29:17.523 15:39:34 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:17.523 15:39:34 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:29:17.523 15:39:34 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:17.523 15:39:34 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:29:17.523 15:39:34 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:29:17.523 15:39:34 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:29:17.523 15:39:34 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:17.523 15:39:34 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:29:17.523 15:39:34 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:29:17.523 15:39:34 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:29:17.523 15:39:34 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:29:17.523 15:39:34 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:29:17.523 15:39:34 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:29:17.523 15:39:34 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:29:17.523 15:39:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:17.523 15:39:34 -- common/autotest_common.sh@10 -- # set +x 00:29:17.523 15:39:34 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:29:17.523 15:39:34 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:29:17.523 15:39:34 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:29:17.523 15:39:34 -- common/autotest_common.sh@10 -- # set +x 00:29:25.666 INFO: APP EXITING 00:29:25.666 INFO: killing all VMs 00:29:25.666 INFO: killing vhost app 00:29:25.666 INFO: EXIT DONE 00:29:28.211 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:29:28.211 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:29:28.211 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:29:28.211 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:29:28.211 
0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:29:28.471 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:29:28.471 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:29:28.471 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:29:28.471 0000:65:00.0 (144d a80a): Already using the nvme driver 00:29:28.471 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:29:28.471 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:29:28.471 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:29:28.471 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:29:28.471 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:29:28.471 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:29:28.471 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:29:28.733 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:29:32.942 Cleaning 00:29:32.942 Removing: /var/run/dpdk/spdk0/config 00:29:32.942 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:32.942 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:32.942 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:32.942 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:32.942 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:32.942 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:32.942 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:32.942 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:32.942 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:32.942 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:32.942 Removing: /var/run/dpdk/spdk1/config 00:29:32.942 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:32.942 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:32.942 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:32.942 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:32.942 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:32.942 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:32.942 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:32.942 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:32.942 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:32.942 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:32.942 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:32.942 Removing: /var/run/dpdk/spdk2/config 00:29:32.942 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:32.942 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:32.942 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:32.942 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:32.942 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:32.942 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:32.942 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:32.942 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:32.942 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:32.942 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:32.942 Removing: /var/run/dpdk/spdk3/config 00:29:32.942 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:32.942 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:32.942 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:32.942 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:32.942 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:32.942 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:32.942 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:32.942 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:32.942 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:32.942 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:32.942 Removing: /var/run/dpdk/spdk4/config 00:29:32.942 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:32.942 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:32.942 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:32.942 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:32.942 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:32.942 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:32.942 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:32.942 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:32.942 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:32.942 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:32.942 Removing: /dev/shm/bdev_svc_trace.1 00:29:32.942 Removing: /dev/shm/nvmf_trace.0 00:29:32.942 Removing: /dev/shm/spdk_tgt_trace.pid1415605 00:29:32.942 Removing: /var/run/dpdk/spdk0 00:29:32.942 Removing: /var/run/dpdk/spdk1 00:29:32.942 Removing: /var/run/dpdk/spdk2 00:29:32.942 Removing: /var/run/dpdk/spdk3 00:29:32.942 Removing: /var/run/dpdk/spdk4 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1413709 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1415605 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1416598 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1417718 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1417983 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1419234 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1419397 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1419852 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1420718 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1421445 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1421834 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1422236 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1422648 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1423057 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1423297 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1423522 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1423851 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1425409 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1428855 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1429233 
00:29:32.942 Removing: /var/run/dpdk/spdk_pid1429605 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1429931 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1430319 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1430530 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1431034 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1431069 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1431418 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1431751 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1431893 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1432131 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1432587 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1432941 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1433346 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1433726 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1433766 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1434158 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1434406 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1434653 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1434936 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1435292 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1435654 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1436011 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1436372 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1436727 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1436977 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1437238 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1437508 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1437851 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1438210 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1438566 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1438922 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1439262 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1439522 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1439804 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1440067 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1440418 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1440753 00:29:32.942 Removing: 
/var/run/dpdk/spdk_pid1441218 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1445781 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1501564 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1506669 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1518144 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1524633 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1529701 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1530379 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1544336 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1544411 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1545435 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1546464 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1547542 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1548156 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1548230 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1548502 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1548577 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1548579 00:29:32.942 Removing: /var/run/dpdk/spdk_pid1549583 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1550589 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1551615 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1552269 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1552357 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1552630 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1554053 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1555456 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1566114 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1566654 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1571921 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1578731 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1581816 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1594226 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1605250 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1607263 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1608279 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1629618 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1634244 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1639561 
00:29:32.943 Removing: /var/run/dpdk/spdk_pid1641551 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1643897 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1643974 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1644258 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1644593 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1645241 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1647328 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1648403 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1648780 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1651397 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1652195 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1652908 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1657925 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1670089 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1675479 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1682758 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1684267 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1686112 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1691273 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1696368 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1705263 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1705268 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1710375 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1710614 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1710769 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1711382 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1711387 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1716519 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1717326 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1722569 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1725922 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1733076 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1739527 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1747944 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1747946 00:29:32.943 Removing: /var/run/dpdk/spdk_pid1770637 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1771318 00:29:33.204 Removing: 
/var/run/dpdk/spdk_pid1772054 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1772838 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1773822 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1774643 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1775414 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1776121 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1781343 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1781680 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1789443 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1789642 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1792442 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1799640 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1799649 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1805716 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1808141 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1810487 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1811857 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1814337 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1815652 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1825974 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1826532 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1827099 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1830116 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1830735 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1831405 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1836778 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1836807 00:29:33.204 Removing: /var/run/dpdk/spdk_pid1838608 00:29:33.204 Clean 00:29:33.465 15:39:50 -- common/autotest_common.sh@1437 -- # return 0 00:29:33.465 15:39:50 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:29:33.465 15:39:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:33.465 15:39:50 -- common/autotest_common.sh@10 -- # set +x 00:29:33.465 15:39:50 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:29:33.465 15:39:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:33.465 15:39:50 -- common/autotest_common.sh@10 -- # set +x 00:29:33.465 
15:39:50 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:33.465 15:39:50 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:29:33.465 15:39:50 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:29:33.465 15:39:50 -- spdk/autotest.sh@389 -- # hash lcov 00:29:33.465 15:39:50 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:33.465 15:39:50 -- spdk/autotest.sh@391 -- # hostname 00:29:33.465 15:39:50 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:29:33.725 geninfo: WARNING: invalid characters removed from testname! 
00:30:00.384 15:40:14 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:00.384 15:40:17 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:01.322 15:40:18 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:03.233 15:40:20 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:04.618 15:40:21 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:06.533 15:40:23 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:07.918 15:40:25 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:07.918 15:40:25 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:07.918 15:40:25 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:07.918 15:40:25 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:07.918 15:40:25 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:07.918 15:40:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.918 15:40:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.918 15:40:25 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.918 15:40:25 -- paths/export.sh@5 -- $ export PATH 00:30:07.918 15:40:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.918 15:40:25 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:30:07.918 15:40:25 -- common/autobuild_common.sh@435 -- $ date +%s 00:30:07.918 15:40:25 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714138825.XXXXXX 00:30:07.918 15:40:25 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714138825.GlzNvN 00:30:07.918 15:40:25 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:30:07.918 15:40:25 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:30:07.918 15:40:25 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:30:07.918 15:40:25 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:07.918 15:40:25 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:07.918 15:40:25 -- common/autobuild_common.sh@451 -- $ get_config_params 00:30:07.918 15:40:25 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:30:07.918 15:40:25 -- common/autotest_common.sh@10 -- $ set +x 00:30:07.918 15:40:25 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:30:07.918 15:40:25 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:30:07.918 15:40:25 -- pm/common@17 -- $ local monitor 00:30:07.918 15:40:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:07.918 15:40:25 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1850479 00:30:07.918 15:40:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:07.918 15:40:25 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1850481 00:30:07.918 15:40:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:07.918 15:40:25 -- pm/common@21 -- $ date +%s 00:30:07.918 15:40:25 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1850483 00:30:07.918 15:40:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:07.918 15:40:25 -- pm/common@21 -- $ date +%s 00:30:07.918 15:40:25 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1850486 00:30:07.918 15:40:25 -- pm/common@26 -- $ sleep 1 00:30:07.918 15:40:25 -- pm/common@21 -- $ date +%s 00:30:07.918 15:40:25 -- pm/common@21 -- $ date +%s 00:30:07.918 15:40:25 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714138825 00:30:07.918 15:40:25 -- pm/common@21 -- $ sudo -E 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714138825 00:30:07.918 15:40:25 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714138825 00:30:07.918 15:40:25 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714138825 00:30:07.918 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714138825_collect-vmstat.pm.log 00:30:07.918 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714138825_collect-cpu-temp.pm.log 00:30:07.918 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714138825_collect-bmc-pm.bmc.pm.log 00:30:07.918 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714138825_collect-cpu-load.pm.log 00:30:08.861 15:40:26 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:30:08.861 15:40:26 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:30:08.861 15:40:26 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:08.861 15:40:26 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:08.861 15:40:26 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:08.861 15:40:26 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:08.861 15:40:26 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:08.861 15:40:26 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:08.861 15:40:26 -- 
common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:08.861 15:40:26 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:08.861 15:40:26 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:08.861 15:40:26 -- pm/common@30 -- $ signal_monitor_resources TERM 00:30:08.861 15:40:26 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:30:08.861 15:40:26 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:08.861 15:40:26 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:30:08.861 15:40:26 -- pm/common@45 -- $ pid=1850497 00:30:08.861 15:40:26 -- pm/common@52 -- $ sudo kill -TERM 1850497 00:30:08.861 15:40:26 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:08.861 15:40:26 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:30:08.861 15:40:26 -- pm/common@45 -- $ pid=1850499 00:30:08.861 15:40:26 -- pm/common@52 -- $ sudo kill -TERM 1850499 00:30:09.122 15:40:26 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:09.122 15:40:26 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:30:09.122 15:40:26 -- pm/common@45 -- $ pid=1850498 00:30:09.122 15:40:26 -- pm/common@52 -- $ sudo kill -TERM 1850498 00:30:09.122 15:40:26 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:09.122 15:40:26 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:30:09.122 15:40:26 -- pm/common@45 -- $ pid=1850500 00:30:09.122 15:40:26 -- pm/common@52 -- $ sudo kill -TERM 1850500 00:30:09.122 + [[ -n 1293505 ]] 00:30:09.122 + sudo kill 1293505 00:30:09.133 [Pipeline] } 00:30:09.151 [Pipeline] // stage 
00:30:09.156 [Pipeline] } 00:30:09.176 [Pipeline] // timeout 00:30:09.185 [Pipeline] } 00:30:09.202 [Pipeline] // catchError 00:30:09.208 [Pipeline] } 00:30:09.226 [Pipeline] // wrap 00:30:09.232 [Pipeline] } 00:30:09.248 [Pipeline] // catchError 00:30:09.258 [Pipeline] stage 00:30:09.261 [Pipeline] { (Epilogue) 00:30:09.280 [Pipeline] catchError 00:30:09.282 [Pipeline] { 00:30:09.298 [Pipeline] echo 00:30:09.300 Cleanup processes 00:30:09.307 [Pipeline] sh 00:30:09.599 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:09.599 1850595 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:30:09.599 1851053 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:09.615 [Pipeline] sh 00:30:09.905 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:09.905 ++ grep -v 'sudo pgrep' 00:30:09.905 ++ awk '{print $1}' 00:30:09.905 + sudo kill -9 1850595 00:30:09.918 [Pipeline] sh 00:30:10.205 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:20.228 [Pipeline] sh 00:30:20.513 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:20.513 Artifacts sizes are good 00:30:20.531 [Pipeline] archiveArtifacts 00:30:20.539 Archiving artifacts 00:30:20.732 [Pipeline] sh 00:30:21.060 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:30:21.078 [Pipeline] cleanWs 00:30:21.089 [WS-CLEANUP] Deleting project workspace... 00:30:21.089 [WS-CLEANUP] Deferred wipeout is used... 00:30:21.096 [WS-CLEANUP] done 00:30:21.098 [Pipeline] } 00:30:21.117 [Pipeline] // catchError 00:30:21.128 [Pipeline] sh 00:30:21.412 + logger -p user.info -t JENKINS-CI 00:30:21.421 [Pipeline] } 00:30:21.436 [Pipeline] // stage 00:30:21.439 [Pipeline] } 00:30:21.450 [Pipeline] // node 00:30:21.454 [Pipeline] End of Pipeline 00:30:21.489 Finished: SUCCESS